I1014 22:56:56.497650 7 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1014 22:56:56.568250 7 e2e.go:129] Starting e2e run "02afe796-93df-403d-b7e6-808052deba20" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1602716215 - Will randomize all specs
Will run 303 of 5232 specs

Oct 14 22:56:56.626: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 22:56:56.631: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 14 22:56:56.653: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 14 22:56:56.683: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 14 22:56:56.683: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 14 22:56:56.683: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 14 22:56:56.690: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 14 22:56:56.690: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 14 22:56:56.690: INFO: e2e test version: v1.19.3-rc.0
Oct 14 22:56:56.691: INFO: kube-apiserver version: v1.19.0
Oct 14 22:56:56.691: INFO: >>> kubeConfig: /root/.kube/config
Oct 14 22:56:56.697: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:56:56.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
Oct 14 22:56:56.802: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Oct 14 22:56:56.828: INFO: Waiting up to 5m0s for pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020" in namespace "downward-api-7805" to be "Succeeded or Failed"
Oct 14 22:56:56.856: INFO: Pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020": Phase="Pending", Reason="", readiness=false. Elapsed: 28.308622ms
Oct 14 22:56:58.859: INFO: Pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031709802s
Oct 14 22:57:00.864: INFO: Pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020": Phase="Running", Reason="", readiness=true. Elapsed: 4.036223246s
Oct 14 22:57:02.868: INFO: Pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040818785s
STEP: Saw pod success
Oct 14 22:57:02.869: INFO: Pod "downward-api-40b4775d-81bd-4eda-abe5-490a756d9020" satisfied condition "Succeeded or Failed"
Oct 14 22:57:02.872: INFO: Trying to get logs from node leguer-worker pod downward-api-40b4775d-81bd-4eda-abe5-490a756d9020 container dapi-container: 
STEP: delete the pod
Oct 14 22:57:02.998: INFO: Waiting for pod downward-api-40b4775d-81bd-4eda-abe5-490a756d9020 to disappear
Oct 14 22:57:03.031: INFO: Pod downward-api-40b4775d-81bd-4eda-abe5-490a756d9020 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:03.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7805" for this suite.
• [SLOW TEST:6.348 seconds]
[sig-node] Downward API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":1,"skipped":23,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:03.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:03.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1962" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":2,"skipped":114,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:03.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 22:57:03.401: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4095fa68-e02c-4147-ab41-01ea5b2134d9" in namespace "security-context-test-2555" to be "Succeeded or Failed"
Oct 14 22:57:03.416: INFO: Pod "busybox-readonly-false-4095fa68-e02c-4147-ab41-01ea5b2134d9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.384564ms
Oct 14 22:57:05.517: INFO: Pod "busybox-readonly-false-4095fa68-e02c-4147-ab41-01ea5b2134d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115373389s
Oct 14 22:57:07.535: INFO: Pod "busybox-readonly-false-4095fa68-e02c-4147-ab41-01ea5b2134d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133351442s
Oct 14 22:57:07.535: INFO: Pod "busybox-readonly-false-4095fa68-e02c-4147-ab41-01ea5b2134d9" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:07.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2555" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":3,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:07.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-4911/configmap-test-26dfafbd-8c7f-45ba-90d8-5ef93cf6d414
STEP: Creating a pod to test consume configMaps
Oct 14 22:57:07.913: INFO: Waiting up to 5m0s for pod "pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f" in namespace "configmap-4911" to be "Succeeded or Failed"
Oct 14 22:57:07.916: INFO: Pod "pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331967ms
Oct 14 22:57:09.941: INFO: Pod "pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02863164s
Oct 14 22:57:12.020: INFO: Pod "pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107007959s
STEP: Saw pod success
Oct 14 22:57:12.020: INFO: Pod "pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f" satisfied condition "Succeeded or Failed"
Oct 14 22:57:12.023: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f container env-test: 
STEP: delete the pod
Oct 14 22:57:12.057: INFO: Waiting for pod pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f to disappear
Oct 14 22:57:12.066: INFO: Pod pod-configmaps-fba380f1-724e-4b14-b34b-61d508abe37f no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:12.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4911" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":4,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:12.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:29.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6637" for this suite.
• [SLOW TEST:17.134 seconds]
[sig-api-machinery] ResourceQuota
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":5,"skipped":180,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:29.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-0a7e1944-9fd7-45a3-bfda-d67223e61f40
STEP: Creating a pod to test consume secrets
Oct 14 22:57:29.301: INFO: Waiting up to 5m0s for pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9" in namespace "secrets-9875" to be "Succeeded or Failed"
Oct 14 22:57:29.337: INFO: Pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.261014ms
Oct 14 22:57:31.342: INFO: Pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040720327s
Oct 14 22:57:33.346: INFO: Pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.044664525s
Oct 14 22:57:35.350: INFO: Pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.049000739s
STEP: Saw pod success
Oct 14 22:57:35.350: INFO: Pod "pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9" satisfied condition "Succeeded or Failed"
Oct 14 22:57:35.353: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9 container secret-env-test: 
STEP: delete the pod
Oct 14 22:57:35.398: INFO: Waiting for pod pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9 to disappear
Oct 14 22:57:35.426: INFO: Pod pod-secrets-684d4adb-c2b9-426b-8c55-e932cdd954f9 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:35.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9875" for this suite.
• [SLOW TEST:6.224 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":6,"skipped":182,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:35.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 22:57:35.511: INFO: Creating ReplicaSet my-hostname-basic-e6666676-623e-425f-8659-8893d9106991
Oct 14 22:57:35.546: INFO: Pod name my-hostname-basic-e6666676-623e-425f-8659-8893d9106991: Found 0 pods out of 1
Oct 14 22:57:40.595: INFO: Pod name my-hostname-basic-e6666676-623e-425f-8659-8893d9106991: Found 1 pods out of 1
Oct 14 22:57:40.595: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e6666676-623e-425f-8659-8893d9106991" is running
Oct 14 22:57:40.657: INFO: Pod "my-hostname-basic-e6666676-623e-425f-8659-8893d9106991-gfg76" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 22:57:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 22:57:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 22:57:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-14 22:57:35 +0000 UTC Reason: Message:}])
Oct 14 22:57:40.658: INFO: Trying to dial the pod
Oct 14 22:57:45.670: INFO: Controller my-hostname-basic-e6666676-623e-425f-8659-8893d9106991: Got expected result from replica 1 [my-hostname-basic-e6666676-623e-425f-8659-8893d9106991-gfg76]: "my-hostname-basic-e6666676-623e-425f-8659-8893d9106991-gfg76", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:45.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5293" for this suite.
• [SLOW TEST:10.244 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":7,"skipped":194,"failed":0}
SSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:45.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 14 22:57:55.809: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 22:57:55.831: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 22:57:57.831: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 22:57:57.835: INFO: Pod pod-with-poststart-exec-hook still exists
Oct 14 22:57:59.831: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Oct 14 22:57:59.835: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:57:59.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5680" for this suite.
• [SLOW TEST:14.165 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":199,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:57:59.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 14 22:58:04.475: INFO: Successfully updated pod "pod-update-activedeadlineseconds-9821995b-9377-4046-b959-5d39d1566369"
Oct 14 22:58:04.475: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-9821995b-9377-4046-b959-5d39d1566369" in namespace "pods-8915" to be "terminated due to deadline exceeded"
Oct 14 22:58:04.494: INFO: Pod "pod-update-activedeadlineseconds-9821995b-9377-4046-b959-5d39d1566369": Phase="Running", Reason="", readiness=true. Elapsed: 19.201492ms
Oct 14 22:58:06.499: INFO: Pod "pod-update-activedeadlineseconds-9821995b-9377-4046-b959-5d39d1566369": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.023612461s
Oct 14 22:58:06.499: INFO: Pod "pod-update-activedeadlineseconds-9821995b-9377-4046-b959-5d39d1566369" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:58:06.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8915" for this suite.
• [SLOW TEST:6.666 seconds]
[k8s.io] Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":215,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:58:06.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 22:58:06.875: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352" in namespace "downward-api-2474" to be "Succeeded or Failed"
Oct 14 22:58:06.894: INFO: Pod "downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352": Phase="Pending", Reason="", readiness=false. Elapsed: 18.849313ms
Oct 14 22:58:08.898: INFO: Pod "downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022547404s
Oct 14 22:58:10.903: INFO: Pod "downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027309848s
STEP: Saw pod success
Oct 14 22:58:10.903: INFO: Pod "downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352" satisfied condition "Succeeded or Failed"
Oct 14 22:58:10.905: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352 container client-container: 
STEP: delete the pod
Oct 14 22:58:11.076: INFO: Waiting for pod downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352 to disappear
Oct 14 22:58:11.083: INFO: Pod downwardapi-volume-e7372f53-1652-4b14-b60d-d69d2b032352 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:58:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2474" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":10,"skipped":267,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 22:58:11.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1014 22:58:12.208078 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Oct 14 22:59:14.276: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 22:59:14.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8000" for this suite.
• [SLOW TEST:63.192 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":11,"skipped":273,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:14.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 22:59:15.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 
22:59:17.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 22:59:19.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313155, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the 
service has paired with the endpoint Oct 14 22:59:22.183: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:22.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5971" for this suite. STEP: Destroying namespace "webhook-5971-markers" for this suite. 
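The fail-closed webhook test above registers a webhook the API server cannot reach and verifies that ConfigMap creation is rejected rather than allowed through. A hedged sketch of the kind of registration object involved, assuming illustrative names (the service namespace/name echo the run above, but the webhook name and path are placeholders):

```yaml
# Illustrative only; not part of the captured log.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example          # placeholder name
webhooks:
- name: fail-closed.example.com      # placeholder webhook name
  failurePolicy: Fail                # reject requests when the webhook is unreachable
  clientConfig:
    service:
      namespace: webhook-5971
      name: e2e-test-webhook
      path: /unreachable             # placeholder path the server does not serve
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  sideEffects: None
  admissionReviewVersions: ["v1"]
```

With `failurePolicy: Fail`, an unreachable endpoint means every matching request is unconditionally rejected, which is exactly what the test asserts.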
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.389 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":12,"skipped":317,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:22.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 22:59:23.605: INFO: Checking APIGroup: apiregistration.k8s.io Oct 14 22:59:23.606: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 14 22:59:23.606: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.606: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 14 22:59:23.606: INFO: Checking APIGroup: extensions Oct 14 22:59:23.607: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 14 22:59:23.607: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 14 22:59:23.607: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 14 22:59:23.607: INFO: Checking APIGroup: apps Oct 14 22:59:23.608: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 14 22:59:23.608: INFO: Versions found [{apps/v1 v1}] Oct 14 22:59:23.608: INFO: apps/v1 matches apps/v1 Oct 14 22:59:23.608: INFO: Checking APIGroup: events.k8s.io Oct 14 22:59:23.609: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 14 22:59:23.609: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.609: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 14 22:59:23.609: INFO: Checking APIGroup: authentication.k8s.io Oct 14 22:59:23.610: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 14 22:59:23.610: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.610: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 14 22:59:23.610: INFO: Checking APIGroup: authorization.k8s.io Oct 14 22:59:23.611: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 14 22:59:23.611: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.611: INFO: authorization.k8s.io/v1 matches 
authorization.k8s.io/v1 Oct 14 22:59:23.611: INFO: Checking APIGroup: autoscaling Oct 14 22:59:23.612: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 14 22:59:23.612: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 14 22:59:23.612: INFO: autoscaling/v1 matches autoscaling/v1 Oct 14 22:59:23.612: INFO: Checking APIGroup: batch Oct 14 22:59:23.613: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 14 22:59:23.613: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 14 22:59:23.613: INFO: batch/v1 matches batch/v1 Oct 14 22:59:23.613: INFO: Checking APIGroup: certificates.k8s.io Oct 14 22:59:23.614: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 14 22:59:23.614: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.614: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 14 22:59:23.614: INFO: Checking APIGroup: networking.k8s.io Oct 14 22:59:23.615: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 14 22:59:23.615: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.615: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 14 22:59:23.615: INFO: Checking APIGroup: policy Oct 14 22:59:23.616: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Oct 14 22:59:23.616: INFO: Versions found [{policy/v1beta1 v1beta1}] Oct 14 22:59:23.616: INFO: policy/v1beta1 matches policy/v1beta1 Oct 14 22:59:23.616: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 14 22:59:23.617: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 14 22:59:23.617: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.617: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 14 22:59:23.617: INFO: Checking APIGroup: storage.k8s.io Oct 14 22:59:23.618: INFO: 
PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 14 22:59:23.618: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.618: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 14 22:59:23.618: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 14 22:59:23.619: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 14 22:59:23.619: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.619: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Oct 14 22:59:23.619: INFO: Checking APIGroup: apiextensions.k8s.io Oct 14 22:59:23.620: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 14 22:59:23.620: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.620: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 14 22:59:23.620: INFO: Checking APIGroup: scheduling.k8s.io Oct 14 22:59:23.621: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 14 22:59:23.621: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.621: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 14 22:59:23.621: INFO: Checking APIGroup: coordination.k8s.io Oct 14 22:59:23.622: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 14 22:59:23.622: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.622: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 14 22:59:23.622: INFO: Checking APIGroup: node.k8s.io Oct 14 22:59:23.623: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Oct 14 22:59:23.623: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.623: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Oct 14 22:59:23.623: INFO: Checking APIGroup: discovery.k8s.io Oct 14 22:59:23.624: INFO: 
PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Oct 14 22:59:23.624: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Oct 14 22:59:23.624: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:23.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-6879" for this suite. •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":13,"skipped":332,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:23.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-af391135-a6b9-459c-9750-6e986c51a213 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:23.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "configmap-819" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":14,"skipped":340,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:23.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-6174 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 14 22:59:24.001: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 14 22:59:24.072: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 22:59:26.076: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 22:59:28.262: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 22:59:30.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 22:59:32.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 22:59:34.076: INFO: The status of Pod netserver-0 is Running (Ready = false) 
Oct 14 22:59:36.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 22:59:38.077: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 22:59:40.076: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 14 22:59:40.082: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 22:59:42.087: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 14 22:59:46.111: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.2.122&port=8081&tries=1'] Namespace:pod-network-test-6174 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 22:59:46.111: INFO: >>> kubeConfig: /root/.kube/config I1014 22:59:46.147890 7 log.go:181] (0xc0002a5a20) (0xc000970320) Create stream I1014 22:59:46.147924 7 log.go:181] (0xc0002a5a20) (0xc000970320) Stream added, broadcasting: 1 I1014 22:59:46.150996 7 log.go:181] (0xc0002a5a20) Reply frame received for 1 I1014 22:59:46.151047 7 log.go:181] (0xc0002a5a20) (0xc000970460) Create stream I1014 22:59:46.151060 7 log.go:181] (0xc0002a5a20) (0xc000970460) Stream added, broadcasting: 3 I1014 22:59:46.152037 7 log.go:181] (0xc0002a5a20) Reply frame received for 3 I1014 22:59:46.152088 7 log.go:181] (0xc0002a5a20) (0xc0009705a0) Create stream I1014 22:59:46.152105 7 log.go:181] (0xc0002a5a20) (0xc0009705a0) Stream added, broadcasting: 5 I1014 22:59:46.153250 7 log.go:181] (0xc0002a5a20) Reply frame received for 5 I1014 22:59:46.213024 7 log.go:181] (0xc0002a5a20) Data frame received for 3 I1014 22:59:46.213090 7 log.go:181] (0xc000970460) (3) Data frame handling I1014 22:59:46.213113 7 log.go:181] (0xc000970460) (3) Data frame sent I1014 22:59:46.213611 7 log.go:181] (0xc0002a5a20) Data frame received for 5 I1014 22:59:46.213640 7 log.go:181] (0xc0009705a0) (5) Data frame handling I1014 22:59:46.213663 7 
log.go:181] (0xc0002a5a20) Data frame received for 3 I1014 22:59:46.213682 7 log.go:181] (0xc000970460) (3) Data frame handling I1014 22:59:46.215718 7 log.go:181] (0xc0002a5a20) Data frame received for 1 I1014 22:59:46.215748 7 log.go:181] (0xc000970320) (1) Data frame handling I1014 22:59:46.215767 7 log.go:181] (0xc000970320) (1) Data frame sent I1014 22:59:46.215793 7 log.go:181] (0xc0002a5a20) (0xc000970320) Stream removed, broadcasting: 1 I1014 22:59:46.215844 7 log.go:181] (0xc0002a5a20) Go away received I1014 22:59:46.216257 7 log.go:181] (0xc0002a5a20) (0xc000970320) Stream removed, broadcasting: 1 I1014 22:59:46.216292 7 log.go:181] (0xc0002a5a20) (0xc000970460) Stream removed, broadcasting: 3 I1014 22:59:46.216317 7 log.go:181] (0xc0002a5a20) (0xc0009705a0) Stream removed, broadcasting: 5 Oct 14 22:59:46.216: INFO: Waiting for responses: map[] Oct 14 22:59:46.219: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.1.157&port=8081&tries=1'] Namespace:pod-network-test-6174 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 22:59:46.219: INFO: >>> kubeConfig: /root/.kube/config I1014 22:59:46.254780 7 log.go:181] (0xc003436580) (0xc000d310e0) Create stream I1014 22:59:46.254813 7 log.go:181] (0xc003436580) (0xc000d310e0) Stream added, broadcasting: 1 I1014 22:59:46.257966 7 log.go:181] (0xc003436580) Reply frame received for 1 I1014 22:59:46.258001 7 log.go:181] (0xc003436580) (0xc000d31180) Create stream I1014 22:59:46.258013 7 log.go:181] (0xc003436580) (0xc000d31180) Stream added, broadcasting: 3 I1014 22:59:46.258940 7 log.go:181] (0xc003436580) Reply frame received for 3 I1014 22:59:46.258986 7 log.go:181] (0xc003436580) (0xc003cd1b80) Create stream I1014 22:59:46.259002 7 log.go:181] (0xc003436580) (0xc003cd1b80) Stream added, broadcasting: 5 I1014 22:59:46.259876 7 log.go:181] (0xc003436580) 
Reply frame received for 5 I1014 22:59:46.330721 7 log.go:181] (0xc003436580) Data frame received for 3 I1014 22:59:46.330745 7 log.go:181] (0xc000d31180) (3) Data frame handling I1014 22:59:46.330761 7 log.go:181] (0xc000d31180) (3) Data frame sent I1014 22:59:46.331518 7 log.go:181] (0xc003436580) Data frame received for 5 I1014 22:59:46.331584 7 log.go:181] (0xc003436580) Data frame received for 3 I1014 22:59:46.331625 7 log.go:181] (0xc000d31180) (3) Data frame handling I1014 22:59:46.331656 7 log.go:181] (0xc003cd1b80) (5) Data frame handling I1014 22:59:46.332800 7 log.go:181] (0xc003436580) Data frame received for 1 I1014 22:59:46.332816 7 log.go:181] (0xc000d310e0) (1) Data frame handling I1014 22:59:46.332898 7 log.go:181] (0xc000d310e0) (1) Data frame sent I1014 22:59:46.332920 7 log.go:181] (0xc003436580) (0xc000d310e0) Stream removed, broadcasting: 1 I1014 22:59:46.332931 7 log.go:181] (0xc003436580) Go away received I1014 22:59:46.333046 7 log.go:181] (0xc003436580) (0xc000d310e0) Stream removed, broadcasting: 1 I1014 22:59:46.333064 7 log.go:181] (0xc003436580) (0xc000d31180) Stream removed, broadcasting: 3 I1014 22:59:46.333072 7 log.go:181] (0xc003436580) (0xc003cd1b80) Stream removed, broadcasting: 5 Oct 14 22:59:46.333: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:46.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6174" for this suite. 
• [SLOW TEST:22.485 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":15,"skipped":365,"failed":0} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:46.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 22:59:46.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4" in namespace "projected-1704" to be "Succeeded or Failed" Oct 14 22:59:46.466: INFO: Pod "downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.490992ms Oct 14 22:59:48.471: INFO: Pod "downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007381511s Oct 14 22:59:50.475: INFO: Pod "downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011311409s STEP: Saw pod success Oct 14 22:59:50.475: INFO: Pod "downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4" satisfied condition "Succeeded or Failed" Oct 14 22:59:50.478: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4 container client-container: STEP: delete the pod Oct 14 22:59:50.544: INFO: Waiting for pod downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4 to disappear Oct 14 22:59:50.554: INFO: Pod downwardapi-volume-e293932a-8465-46e0-984d-1e308e48ddb4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:50.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1704" for this suite. 
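The projected downward API test above checks that when a container sets no CPU limit, `limits.cpu` exposed through the volume falls back to the node's allocatable CPU. A minimal sketch of such a pod, with a placeholder name and image (the run does not show the actual manifest):

```yaml
# Illustrative only; not part of the captured log.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # placeholder name
spec:
  containers:
  - name: client-container
    image: busybox                   # placeholder image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu is set, so the projected value below
    # defaults to the node's allocatable CPU.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
  restartPolicy: Never
```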
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":16,"skipped":365,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:50.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 22:59:50.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278" in namespace "downward-api-6224" to be "Succeeded or Failed" Oct 14 22:59:50.698: INFO: Pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278": Phase="Pending", Reason="", readiness=false. Elapsed: 57.637491ms Oct 14 22:59:52.993: INFO: Pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.351955608s Oct 14 22:59:54.997: INFO: Pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278": Phase="Pending", Reason="", readiness=false. Elapsed: 4.356054182s Oct 14 22:59:57.001: INFO: Pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.359998711s STEP: Saw pod success Oct 14 22:59:57.001: INFO: Pod "downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278" satisfied condition "Succeeded or Failed" Oct 14 22:59:57.003: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278 container client-container: STEP: delete the pod Oct 14 22:59:57.090: INFO: Waiting for pod downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278 to disappear Oct 14 22:59:57.095: INFO: Pod downwardapi-volume-aae32797-4b9e-478d-ab68-5c6f5689a278 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 22:59:57.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6224" for this suite. 
• [SLOW TEST:6.542 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":17,"skipped":368,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 22:59:57.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-6f91c322-1184-4eb3-8c88-04b05a5fca1d STEP: Creating a pod to test consume configMaps Oct 14 22:59:57.186: INFO: Waiting up to 5m0s for pod "pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6" in namespace 
"configmap-579" to be "Succeeded or Failed" Oct 14 22:59:57.225: INFO: Pod "pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6": Phase="Pending", Reason="", readiness=false. Elapsed: 39.638076ms Oct 14 22:59:59.230: INFO: Pod "pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044213185s Oct 14 23:00:01.235: INFO: Pod "pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049183568s STEP: Saw pod success Oct 14 23:00:01.235: INFO: Pod "pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6" satisfied condition "Succeeded or Failed" Oct 14 23:00:01.238: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6 container configmap-volume-test: STEP: delete the pod Oct 14 23:00:01.341: INFO: Waiting for pod pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6 to disappear Oct 14 23:00:01.351: INFO: Pod pod-configmaps-4325f91b-063e-47b0-9fc7-c6938418e5b6 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:00:01.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-579" for this suite. 
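The ConfigMap volume test above consumes the map "with mappings", i.e. it projects selected keys to chosen file paths via `items` rather than mounting every key at its own name. A hedged sketch of the shape of such a pod, using placeholder names, image, and key (only the ConfigMap name echoes the run above):

```yaml
# Illustrative only; not part of the captured log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example          # placeholder name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                      # placeholder image
    command: ["sh", "-c", "cat /etc/config/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-6f91c322-1184-4eb3-8c88-04b05a5fca1d
      items:                            # the "mappings": key -> relative file path
      - key: data-1                     # placeholder key
        path: path/to/data-1
  restartPolicy: Never
```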
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":18,"skipped":406,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:00:01.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Oct 14 23:00:01.438: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-757 /api/v1/namespaces/watch-757/configmaps/e2e-watch-test-watch-closed cd4cff91-1112-46bd-aa81-05f0dab1c075 2941155 0 2020-10-14 23:00:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 23:00:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:00:01.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-757 
/api/v1/namespaces/watch-757/configmaps/e2e-watch-test-watch-closed cd4cff91-1112-46bd-aa81-05f0dab1c075 2941156 0 2020-10-14 23:00:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 23:00:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Oct 14 23:00:01.466: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-757 /api/v1/namespaces/watch-757/configmaps/e2e-watch-test-watch-closed cd4cff91-1112-46bd-aa81-05f0dab1c075 2941157 0 2020-10-14 23:00:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 23:00:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:00:01.467: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-757 /api/v1/namespaces/watch-757/configmaps/e2e-watch-test-watch-closed cd4cff91-1112-46bd-aa81-05f0dab1c075 2941158 0 2020-10-14 23:00:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-14 23:00:01 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:00:01.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-757" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":19,"skipped":412,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:00:01.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:02:01.679: INFO: Deleting pod "var-expansion-6dcb6687-c4c8-4706-9069-61f735abd6cd" in namespace "var-expansion-105" Oct 14 23:02:01.684: INFO: Wait up to 5m0s for pod "var-expansion-6dcb6687-c4c8-4706-9069-61f735abd6cd" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:02:05.725: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "var-expansion-105" for this suite. • [SLOW TEST:124.258 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":20,"skipped":420,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:02:05.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:02:05.921: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63776255-5139-4935-8993-288743ef3434" in namespace "downward-api-4267" to be "Succeeded or Failed" Oct 14 23:02:06.066: INFO: Pod "downwardapi-volume-63776255-5139-4935-8993-288743ef3434": Phase="Pending", Reason="", readiness=false. Elapsed: 144.755926ms Oct 14 23:02:08.070: INFO: Pod "downwardapi-volume-63776255-5139-4935-8993-288743ef3434": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149006221s Oct 14 23:02:10.074: INFO: Pod "downwardapi-volume-63776255-5139-4935-8993-288743ef3434": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152798687s STEP: Saw pod success Oct 14 23:02:10.074: INFO: Pod "downwardapi-volume-63776255-5139-4935-8993-288743ef3434" satisfied condition "Succeeded or Failed" Oct 14 23:02:10.077: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-63776255-5139-4935-8993-288743ef3434 container client-container: STEP: delete the pod Oct 14 23:02:10.144: INFO: Waiting for pod downwardapi-volume-63776255-5139-4935-8993-288743ef3434 to disappear Oct 14 23:02:10.151: INFO: Pod downwardapi-volume-63776255-5139-4935-8993-288743ef3434 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:02:10.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4267" for this suite. 
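The Downward API volume test that just passed checks that, when a container sets no CPU limit, the kubelet substitutes the node's allocatable CPU into the projected file. A minimal sketch of such a pod follows; the container name `client-container` appears in the log, but the pod name, image, and command are assumptions for illustration:

```yaml
# Hedged sketch: a downwardAPI volume exposing the container's effective
# CPU limit as a file. No resources.limits.cpu is set, so the value
# written to /etc/podinfo/cpu_limit defaults to node allocatable CPU --
# the behavior this conformance test verifies. Only "client-container"
# is taken from the log; the rest is an assumed example.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m   # report the value in millicores
  containers:
    - name: client-container
      image: registry.k8s.io/e2e-test-images/agnhost:2.21 # assumed test image
      command: ["cat", "/etc/podinfo/cpu_limit"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
```

As in the other volume tests in this run, the pod prints the projected file and exits, so the log records `Pending` → `Succeeded` and then fetches the container logs to assert on the value.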
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":21,"skipped":430,"failed":0} S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:02:10.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:02:10.447: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-9274 I1014 23:02:10.675708 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9274, replica count: 1 I1014 23:02:11.726110 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:02:12.726340 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:02:13.726597 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:02:14.726781 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:02:14.868: INFO: Created: latency-svc-65mpn Oct 14 23:02:14.889: INFO: Got endpoints: latency-svc-65mpn [62.592703ms] Oct 14 23:02:14.976: INFO: Created: latency-svc-qp7zn Oct 14 23:02:14.980: INFO: Got endpoints: latency-svc-qp7zn [90.339123ms] Oct 14 23:02:15.012: INFO: Created: latency-svc-95ff8 Oct 14 23:02:15.059: INFO: Got endpoints: latency-svc-95ff8 [169.125948ms] Oct 14 23:02:15.107: INFO: Created: latency-svc-kbg8h Oct 14 23:02:15.113: INFO: Got endpoints: latency-svc-kbg8h [222.76799ms] Oct 14 23:02:15.168: INFO: Created: latency-svc-r7b8k Oct 14 23:02:15.184: INFO: Got endpoints: latency-svc-r7b8k [294.755463ms] Oct 14 23:02:15.233: INFO: Created: latency-svc-dh9rb Oct 14 23:02:15.263: INFO: Got endpoints: latency-svc-dh9rb [373.516395ms] Oct 14 23:02:15.264: INFO: Created: latency-svc-rlg9q Oct 14 23:02:15.287: INFO: Got endpoints: latency-svc-rlg9q [396.609731ms] Oct 14 23:02:15.324: INFO: Created: latency-svc-h5x86 Oct 14 23:02:15.378: INFO: Got endpoints: latency-svc-h5x86 [487.860274ms] Oct 14 23:02:15.413: INFO: Created: latency-svc-w7c7r Oct 14 23:02:15.431: INFO: Got endpoints: latency-svc-w7c7r [541.202035ms] Oct 14 23:02:15.521: INFO: Created: latency-svc-pwchh Oct 14 23:02:15.525: INFO: Got endpoints: latency-svc-pwchh [635.341465ms] Oct 14 23:02:15.569: INFO: Created: latency-svc-psz9c Oct 14 23:02:15.587: INFO: Got endpoints: latency-svc-psz9c [698.167783ms] Oct 14 23:02:15.611: INFO: Created: latency-svc-lhjxf Oct 14 23:02:15.652: INFO: Got endpoints: latency-svc-lhjxf [762.32372ms] Oct 14 23:02:15.714: INFO: Created: latency-svc-7fwkq Oct 14 23:02:15.738: INFO: Got endpoints: latency-svc-7fwkq [848.209589ms] Oct 14 23:02:15.822: INFO: Created: latency-svc-mqbs4 Oct 14 23:02:15.834: INFO: Got endpoints: latency-svc-mqbs4 [944.261978ms] Oct 14 23:02:15.882: INFO: Created: latency-svc-hbv8s Oct 14 23:02:15.894: INFO: Got endpoints: 
latency-svc-hbv8s [1.004718547s] Oct 14 23:02:15.934: INFO: Created: latency-svc-27pnm Oct 14 23:02:15.937: INFO: Got endpoints: latency-svc-27pnm [1.047296667s] Oct 14 23:02:15.971: INFO: Created: latency-svc-nwltr Oct 14 23:02:15.985: INFO: Got endpoints: latency-svc-nwltr [1.005006515s] Oct 14 23:02:16.019: INFO: Created: latency-svc-j25r2 Oct 14 23:02:16.033: INFO: Got endpoints: latency-svc-j25r2 [974.180951ms] Oct 14 23:02:16.098: INFO: Created: latency-svc-cd8dn Oct 14 23:02:16.112: INFO: Got endpoints: latency-svc-cd8dn [999.247723ms] Oct 14 23:02:16.169: INFO: Created: latency-svc-8pqcp Oct 14 23:02:16.209: INFO: Got endpoints: latency-svc-8pqcp [1.025328547s] Oct 14 23:02:16.248: INFO: Created: latency-svc-9zs6b Oct 14 23:02:16.268: INFO: Got endpoints: latency-svc-9zs6b [1.005406457s] Oct 14 23:02:16.351: INFO: Created: latency-svc-8mjbp Oct 14 23:02:16.364: INFO: Got endpoints: latency-svc-8mjbp [1.077089127s] Oct 14 23:02:16.391: INFO: Created: latency-svc-5h2bx Oct 14 23:02:16.400: INFO: Got endpoints: latency-svc-5h2bx [1.022366594s] Oct 14 23:02:16.479: INFO: Created: latency-svc-h2rj4 Oct 14 23:02:16.484: INFO: Got endpoints: latency-svc-h2rj4 [1.053203313s] Oct 14 23:02:16.512: INFO: Created: latency-svc-qzv82 Oct 14 23:02:16.527: INFO: Got endpoints: latency-svc-qzv82 [1.001656039s] Oct 14 23:02:16.641: INFO: Created: latency-svc-p6jb4 Oct 14 23:02:16.648: INFO: Got endpoints: latency-svc-p6jb4 [1.060200315s] Oct 14 23:02:16.698: INFO: Created: latency-svc-n4khb Oct 14 23:02:16.713: INFO: Got endpoints: latency-svc-n4khb [1.060887408s] Oct 14 23:02:16.835: INFO: Created: latency-svc-rkmp8 Oct 14 23:02:16.878: INFO: Got endpoints: latency-svc-rkmp8 [1.139821498s] Oct 14 23:02:16.949: INFO: Created: latency-svc-tlfl8 Oct 14 23:02:16.966: INFO: Got endpoints: latency-svc-tlfl8 [1.131724956s] Oct 14 23:02:17.084: INFO: Created: latency-svc-4cm6x Oct 14 23:02:17.089: INFO: Got endpoints: latency-svc-4cm6x [1.194447572s] Oct 14 23:02:17.118: INFO: 
Created: latency-svc-j78dn Oct 14 23:02:17.134: INFO: Got endpoints: latency-svc-j78dn [1.196400949s] Oct 14 23:02:17.183: INFO: Created: latency-svc-5t6t5 Oct 14 23:02:17.249: INFO: Got endpoints: latency-svc-5t6t5 [1.264500419s] Oct 14 23:02:17.251: INFO: Created: latency-svc-f4rgf Oct 14 23:02:17.266: INFO: Got endpoints: latency-svc-f4rgf [1.232803917s] Oct 14 23:02:17.321: INFO: Created: latency-svc-vrbcz Oct 14 23:02:17.383: INFO: Got endpoints: latency-svc-vrbcz [1.271260542s] Oct 14 23:02:17.476: INFO: Created: latency-svc-vm7bp Oct 14 23:02:17.533: INFO: Got endpoints: latency-svc-vm7bp [1.323578206s] Oct 14 23:02:17.603: INFO: Created: latency-svc-jrlc5 Oct 14 23:02:17.619: INFO: Got endpoints: latency-svc-jrlc5 [1.35024699s] Oct 14 23:02:17.702: INFO: Created: latency-svc-xpbfp Oct 14 23:02:17.709: INFO: Got endpoints: latency-svc-xpbfp [1.345381216s] Oct 14 23:02:17.740: INFO: Created: latency-svc-wb5f9 Oct 14 23:02:17.764: INFO: Got endpoints: latency-svc-wb5f9 [1.363593746s] Oct 14 23:02:17.875: INFO: Created: latency-svc-97bwz Oct 14 23:02:17.904: INFO: Got endpoints: latency-svc-97bwz [1.419430289s] Oct 14 23:02:17.933: INFO: Created: latency-svc-6qwxm Oct 14 23:02:17.943: INFO: Got endpoints: latency-svc-6qwxm [1.416785548s] Oct 14 23:02:18.030: INFO: Created: latency-svc-pc4hv Oct 14 23:02:18.053: INFO: Got endpoints: latency-svc-pc4hv [1.405300096s] Oct 14 23:02:18.107: INFO: Created: latency-svc-cmsv5 Oct 14 23:02:18.168: INFO: Got endpoints: latency-svc-cmsv5 [1.454480215s] Oct 14 23:02:18.191: INFO: Created: latency-svc-725b2 Oct 14 23:02:18.221: INFO: Got endpoints: latency-svc-725b2 [1.342624002s] Oct 14 23:02:18.263: INFO: Created: latency-svc-9cvgg Oct 14 23:02:18.317: INFO: Got endpoints: latency-svc-9cvgg [1.351123473s] Oct 14 23:02:18.359: INFO: Created: latency-svc-jjsc9 Oct 14 23:02:18.371: INFO: Got endpoints: latency-svc-jjsc9 [1.28271646s] Oct 14 23:02:18.401: INFO: Created: latency-svc-282mh Oct 14 23:02:18.455: INFO: Got 
endpoints: latency-svc-282mh [1.321112974s] Oct 14 23:02:18.485: INFO: Created: latency-svc-r6nv4 Oct 14 23:02:18.508: INFO: Got endpoints: latency-svc-r6nv4 [1.25851214s] Oct 14 23:02:18.544: INFO: Created: latency-svc-zldrh Oct 14 23:02:18.581: INFO: Got endpoints: latency-svc-zldrh [1.31492571s] Oct 14 23:02:18.604: INFO: Created: latency-svc-kdnb8 Oct 14 23:02:18.624: INFO: Got endpoints: latency-svc-kdnb8 [1.24067777s] Oct 14 23:02:18.653: INFO: Created: latency-svc-2t4zv Oct 14 23:02:18.666: INFO: Got endpoints: latency-svc-2t4zv [1.133324183s] Oct 14 23:02:18.720: INFO: Created: latency-svc-sv69h Oct 14 23:02:18.724: INFO: Got endpoints: latency-svc-sv69h [1.10524825s] Oct 14 23:02:18.785: INFO: Created: latency-svc-m96gf Oct 14 23:02:18.929: INFO: Got endpoints: latency-svc-m96gf [1.219873654s] Oct 14 23:02:18.932: INFO: Created: latency-svc-7cdx8 Oct 14 23:02:18.952: INFO: Got endpoints: latency-svc-7cdx8 [1.187874864s] Oct 14 23:02:18.995: INFO: Created: latency-svc-hrhb2 Oct 14 23:02:19.010: INFO: Got endpoints: latency-svc-hrhb2 [1.105956811s] Oct 14 23:02:19.084: INFO: Created: latency-svc-4xfgq Oct 14 23:02:19.088: INFO: Got endpoints: latency-svc-4xfgq [1.144731892s] Oct 14 23:02:19.119: INFO: Created: latency-svc-g2j4b Oct 14 23:02:19.136: INFO: Got endpoints: latency-svc-g2j4b [1.08263062s] Oct 14 23:02:19.228: INFO: Created: latency-svc-6hvcs Oct 14 23:02:19.233: INFO: Got endpoints: latency-svc-6hvcs [1.064786051s] Oct 14 23:02:19.289: INFO: Created: latency-svc-rgfzl Oct 14 23:02:19.298: INFO: Got endpoints: latency-svc-rgfzl [1.077700163s] Oct 14 23:02:19.432: INFO: Created: latency-svc-q9d2t Oct 14 23:02:19.454: INFO: Got endpoints: latency-svc-q9d2t [1.13731716s] Oct 14 23:02:19.493: INFO: Created: latency-svc-dh4rj Oct 14 23:02:19.517: INFO: Got endpoints: latency-svc-dh4rj [1.14546093s] Oct 14 23:02:19.581: INFO: Created: latency-svc-z6fzm Oct 14 23:02:19.607: INFO: Got endpoints: latency-svc-z6fzm [1.151745345s] Oct 14 23:02:19.731: INFO: 
Created: latency-svc-kk89f Oct 14 23:02:19.743: INFO: Got endpoints: latency-svc-kk89f [1.235092927s] Oct 14 23:02:19.763: INFO: Created: latency-svc-nx2ng Oct 14 23:02:19.773: INFO: Got endpoints: latency-svc-nx2ng [1.19254299s] Oct 14 23:02:19.829: INFO: Created: latency-svc-s7xcm Oct 14 23:02:19.886: INFO: Got endpoints: latency-svc-s7xcm [1.262220378s] Oct 14 23:02:19.912: INFO: Created: latency-svc-8ptrt Oct 14 23:02:19.942: INFO: Got endpoints: latency-svc-8ptrt [1.275900903s] Oct 14 23:02:19.973: INFO: Created: latency-svc-pvrb7 Oct 14 23:02:19.984: INFO: Got endpoints: latency-svc-pvrb7 [1.259516613s] Oct 14 23:02:20.030: INFO: Created: latency-svc-p4ds4 Oct 14 23:02:20.038: INFO: Got endpoints: latency-svc-p4ds4 [1.108802732s] Oct 14 23:02:20.072: INFO: Created: latency-svc-9rkgz Oct 14 23:02:20.081: INFO: Got endpoints: latency-svc-9rkgz [1.129493238s] Oct 14 23:02:20.104: INFO: Created: latency-svc-hh9qv Oct 14 23:02:20.179: INFO: Got endpoints: latency-svc-hh9qv [1.169619274s] Oct 14 23:02:20.182: INFO: Created: latency-svc-6fzsr Oct 14 23:02:20.190: INFO: Got endpoints: latency-svc-6fzsr [1.102124635s] Oct 14 23:02:20.212: INFO: Created: latency-svc-sppk2 Oct 14 23:02:20.249: INFO: Got endpoints: latency-svc-sppk2 [1.112939854s] Oct 14 23:02:20.318: INFO: Created: latency-svc-rqpmh Oct 14 23:02:20.321: INFO: Got endpoints: latency-svc-rqpmh [1.087978136s] Oct 14 23:02:20.369: INFO: Created: latency-svc-gjgn4 Oct 14 23:02:20.383: INFO: Got endpoints: latency-svc-gjgn4 [1.084416246s] Oct 14 23:02:20.411: INFO: Created: latency-svc-qt6xg Oct 14 23:02:20.442: INFO: Got endpoints: latency-svc-qt6xg [988.3061ms] Oct 14 23:02:20.459: INFO: Created: latency-svc-jwlzg Oct 14 23:02:20.488: INFO: Got endpoints: latency-svc-jwlzg [971.138631ms] Oct 14 23:02:20.518: INFO: Created: latency-svc-48d2d Oct 14 23:02:20.528: INFO: Got endpoints: latency-svc-48d2d [921.088239ms] Oct 14 23:02:20.599: INFO: Created: latency-svc-pxffq Oct 14 23:02:20.639: INFO: Got 
endpoints: latency-svc-pxffq [895.660146ms] Oct 14 23:02:20.641: INFO: Created: latency-svc-gr65n Oct 14 23:02:20.669: INFO: Got endpoints: latency-svc-gr65n [895.200441ms] Oct 14 23:02:20.730: INFO: Created: latency-svc-kdbvc Oct 14 23:02:20.758: INFO: Got endpoints: latency-svc-kdbvc [871.078863ms] Oct 14 23:02:20.758: INFO: Created: latency-svc-mng58 Oct 14 23:02:20.779: INFO: Got endpoints: latency-svc-mng58 [836.967426ms] Oct 14 23:02:20.825: INFO: Created: latency-svc-lwzfx Oct 14 23:02:20.862: INFO: Got endpoints: latency-svc-lwzfx [877.977116ms] Oct 14 23:02:20.885: INFO: Created: latency-svc-jt49s Oct 14 23:02:20.902: INFO: Got endpoints: latency-svc-jt49s [863.849023ms] Oct 14 23:02:20.950: INFO: Created: latency-svc-skzgr Oct 14 23:02:21.000: INFO: Got endpoints: latency-svc-skzgr [918.254889ms] Oct 14 23:02:21.004: INFO: Created: latency-svc-jz5xn Oct 14 23:02:21.040: INFO: Got endpoints: latency-svc-jz5xn [860.82168ms] Oct 14 23:02:21.083: INFO: Created: latency-svc-68nnh Oct 14 23:02:21.156: INFO: Got endpoints: latency-svc-68nnh [965.027813ms] Oct 14 23:02:21.179: INFO: Created: latency-svc-9kwcv Oct 14 23:02:21.209: INFO: Got endpoints: latency-svc-9kwcv [960.359801ms] Oct 14 23:02:21.240: INFO: Created: latency-svc-8l5jf Oct 14 23:02:21.329: INFO: Got endpoints: latency-svc-8l5jf [1.008665363s] Oct 14 23:02:21.332: INFO: Created: latency-svc-qlpns Oct 14 23:02:21.376: INFO: Got endpoints: latency-svc-qlpns [993.10242ms] Oct 14 23:02:21.534: INFO: Created: latency-svc-vzct2 Oct 14 23:02:21.538: INFO: Got endpoints: latency-svc-vzct2 [1.095289775s] Oct 14 23:02:21.611: INFO: Created: latency-svc-cv4ll Oct 14 23:02:21.706: INFO: Got endpoints: latency-svc-cv4ll [1.218296179s] Oct 14 23:02:21.713: INFO: Created: latency-svc-5kbxd Oct 14 23:02:21.743: INFO: Got endpoints: latency-svc-5kbxd [1.214848753s] Oct 14 23:02:21.841: INFO: Created: latency-svc-tb6mb Oct 14 23:02:21.875: INFO: Got endpoints: latency-svc-tb6mb [1.235760836s] Oct 14 23:02:21.918: 
INFO: Created: latency-svc-l8cfd Oct 14 23:02:21.928: INFO: Got endpoints: latency-svc-l8cfd [1.25938962s] Oct 14 23:02:21.975: INFO: Created: latency-svc-zmb5z Oct 14 23:02:21.982: INFO: Got endpoints: latency-svc-zmb5z [1.224310175s] Oct 14 23:02:22.001: INFO: Created: latency-svc-4gwrm Oct 14 23:02:22.012: INFO: Got endpoints: latency-svc-4gwrm [1.232904949s] Oct 14 23:02:22.031: INFO: Created: latency-svc-648d6 Oct 14 23:02:22.055: INFO: Got endpoints: latency-svc-648d6 [1.192977596s] Oct 14 23:02:22.102: INFO: Created: latency-svc-pjttj Oct 14 23:02:22.109: INFO: Got endpoints: latency-svc-pjttj [1.207448422s] Oct 14 23:02:22.144: INFO: Created: latency-svc-5bnqs Oct 14 23:02:22.158: INFO: Got endpoints: latency-svc-5bnqs [1.158226843s] Oct 14 23:02:22.181: INFO: Created: latency-svc-g6jr8 Oct 14 23:02:22.194: INFO: Got endpoints: latency-svc-g6jr8 [1.153843133s] Oct 14 23:02:22.239: INFO: Created: latency-svc-8t2t9 Oct 14 23:02:22.248: INFO: Got endpoints: latency-svc-8t2t9 [1.092578398s] Oct 14 23:02:22.271: INFO: Created: latency-svc-9rmgc Oct 14 23:02:22.284: INFO: Got endpoints: latency-svc-9rmgc [1.075025758s] Oct 14 23:02:22.312: INFO: Created: latency-svc-q9c9b Oct 14 23:02:22.377: INFO: Got endpoints: latency-svc-q9c9b [1.047570413s] Oct 14 23:02:22.391: INFO: Created: latency-svc-lrr6n Oct 14 23:02:22.405: INFO: Got endpoints: latency-svc-lrr6n [1.028974579s] Oct 14 23:02:22.427: INFO: Created: latency-svc-5lmhq Oct 14 23:02:22.435: INFO: Got endpoints: latency-svc-5lmhq [896.999519ms] Oct 14 23:02:22.457: INFO: Created: latency-svc-jzws6 Oct 14 23:02:22.473: INFO: Got endpoints: latency-svc-jzws6 [766.817552ms] Oct 14 23:02:22.505: INFO: Created: latency-svc-f5pm4 Oct 14 23:02:22.534: INFO: Got endpoints: latency-svc-f5pm4 [791.368143ms] Oct 14 23:02:22.583: INFO: Created: latency-svc-gjpj9 Oct 14 23:02:22.595: INFO: Got endpoints: latency-svc-gjpj9 [719.84188ms] Oct 14 23:02:22.637: INFO: Created: latency-svc-q4n2j Oct 14 23:02:22.642: INFO: Got 
endpoints: latency-svc-q4n2j [714.140199ms] Oct 14 23:02:22.673: INFO: Created: latency-svc-4bhrb Oct 14 23:02:22.678: INFO: Got endpoints: latency-svc-4bhrb [695.951668ms] Oct 14 23:02:22.773: INFO: Created: latency-svc-s8tm6 Oct 14 23:02:22.777: INFO: Got endpoints: latency-svc-s8tm6 [764.355693ms] Oct 14 23:02:22.805: INFO: Created: latency-svc-7n242 Oct 14 23:02:22.829: INFO: Got endpoints: latency-svc-7n242 [774.043628ms] Oct 14 23:02:22.871: INFO: Created: latency-svc-bfs7j Oct 14 23:02:22.976: INFO: Got endpoints: latency-svc-bfs7j [866.098805ms] Oct 14 23:02:22.984: INFO: Created: latency-svc-7r9xv Oct 14 23:02:22.997: INFO: Got endpoints: latency-svc-7r9xv [838.920702ms] Oct 14 23:02:23.021: INFO: Created: latency-svc-mbfcf Oct 14 23:02:23.039: INFO: Got endpoints: latency-svc-mbfcf [845.066442ms] Oct 14 23:02:23.057: INFO: Created: latency-svc-khr4n Oct 14 23:02:23.113: INFO: Got endpoints: latency-svc-khr4n [865.114084ms] Oct 14 23:02:23.142: INFO: Created: latency-svc-sb7km Oct 14 23:02:23.154: INFO: Got endpoints: latency-svc-sb7km [869.71458ms] Oct 14 23:02:23.177: INFO: Created: latency-svc-xr7r5 Oct 14 23:02:23.191: INFO: Got endpoints: latency-svc-xr7r5 [813.933066ms] Oct 14 23:02:23.262: INFO: Created: latency-svc-gmtv9 Oct 14 23:02:23.281: INFO: Got endpoints: latency-svc-gmtv9 [875.60249ms] Oct 14 23:02:23.298: INFO: Created: latency-svc-tphrf Oct 14 23:02:23.311: INFO: Got endpoints: latency-svc-tphrf [875.911474ms] Oct 14 23:02:23.333: INFO: Created: latency-svc-t8xdt Oct 14 23:02:23.396: INFO: Got endpoints: latency-svc-t8xdt [922.471755ms] Oct 14 23:02:23.398: INFO: Created: latency-svc-nwsrv Oct 14 23:02:23.407: INFO: Got endpoints: latency-svc-nwsrv [872.810228ms] Oct 14 23:02:23.436: INFO: Created: latency-svc-p5jmw Oct 14 23:02:23.449: INFO: Got endpoints: latency-svc-p5jmw [854.740755ms] Oct 14 23:02:23.489: INFO: Created: latency-svc-5cbkn Oct 14 23:02:23.539: INFO: Got endpoints: latency-svc-5cbkn [896.657985ms] Oct 14 23:02:23.549: 
INFO: Created: latency-svc-2j588 Oct 14 23:02:23.574: INFO: Got endpoints: latency-svc-2j588 [895.940729ms] Oct 14 23:02:23.610: INFO: Created: latency-svc-cwbnf Oct 14 23:02:23.635: INFO: Got endpoints: latency-svc-cwbnf [858.347976ms] Oct 14 23:02:23.671: INFO: Created: latency-svc-42ds6 Oct 14 23:02:23.684: INFO: Got endpoints: latency-svc-42ds6 [854.853386ms] Oct 14 23:02:23.736: INFO: Created: latency-svc-bdhst Oct 14 23:02:23.750: INFO: Got endpoints: latency-svc-bdhst [774.016127ms] Oct 14 23:02:23.809: INFO: Created: latency-svc-ddsm6 Oct 14 23:02:23.828: INFO: Got endpoints: latency-svc-ddsm6 [830.866385ms] Oct 14 23:02:23.874: INFO: Created: latency-svc-cxvkm Oct 14 23:02:23.888: INFO: Got endpoints: latency-svc-cxvkm [848.978919ms] Oct 14 23:02:23.970: INFO: Created: latency-svc-snvp8 Oct 14 23:02:23.973: INFO: Got endpoints: latency-svc-snvp8 [859.779836ms] Oct 14 23:02:24.054: INFO: Created: latency-svc-zcxl4 Oct 14 23:02:24.150: INFO: Got endpoints: latency-svc-zcxl4 [995.610665ms] Oct 14 23:02:24.152: INFO: Created: latency-svc-dc96h Oct 14 23:02:24.165: INFO: Got endpoints: latency-svc-dc96h [973.451027ms] Oct 14 23:02:24.197: INFO: Created: latency-svc-wg278 Oct 14 23:02:24.213: INFO: Got endpoints: latency-svc-wg278 [932.013783ms] Oct 14 23:02:24.288: INFO: Created: latency-svc-2xd82 Oct 14 23:02:24.317: INFO: Got endpoints: latency-svc-2xd82 [1.006064226s] Oct 14 23:02:24.317: INFO: Created: latency-svc-pdfhz Oct 14 23:02:24.348: INFO: Got endpoints: latency-svc-pdfhz [951.765253ms] Oct 14 23:02:24.385: INFO: Created: latency-svc-qfm8t Oct 14 23:02:24.425: INFO: Got endpoints: latency-svc-qfm8t [1.017715197s] Oct 14 23:02:24.432: INFO: Created: latency-svc-cbcv8 Oct 14 23:02:24.467: INFO: Got endpoints: latency-svc-cbcv8 [1.0178705s] Oct 14 23:02:24.505: INFO: Created: latency-svc-n2fvs Oct 14 23:02:24.520: INFO: Got endpoints: latency-svc-n2fvs [981.124324ms] Oct 14 23:02:24.637: INFO: Created: latency-svc-8wcnb Oct 14 23:02:25.126: INFO: Got 
endpoints: latency-svc-8wcnb [1.551668347s] Oct 14 23:02:25.337: INFO: Created: latency-svc-nmbp2 Oct 14 23:02:25.348: INFO: Got endpoints: latency-svc-nmbp2 [1.712494992s] Oct 14 23:02:25.489: INFO: Created: latency-svc-lrxsb Oct 14 23:02:25.516: INFO: Got endpoints: latency-svc-lrxsb [1.831659254s] Oct 14 23:02:25.636: INFO: Created: latency-svc-w6gms Oct 14 23:02:25.679: INFO: Got endpoints: latency-svc-w6gms [1.929222382s] Oct 14 23:02:25.778: INFO: Created: latency-svc-9wg74 Oct 14 23:02:25.788: INFO: Got endpoints: latency-svc-9wg74 [1.959701922s] Oct 14 23:02:25.855: INFO: Created: latency-svc-b9r8k Oct 14 23:02:25.887: INFO: Got endpoints: latency-svc-b9r8k [1.998641154s] Oct 14 23:02:26.032: INFO: Created: latency-svc-w7jws Oct 14 23:02:26.065: INFO: Got endpoints: latency-svc-w7jws [2.092220477s] Oct 14 23:02:26.096: INFO: Created: latency-svc-tmmwk Oct 14 23:02:26.125: INFO: Got endpoints: latency-svc-tmmwk [1.975403178s] Oct 14 23:02:26.167: INFO: Created: latency-svc-2vrrx Oct 14 23:02:26.183: INFO: Got endpoints: latency-svc-2vrrx [2.018774652s] Oct 14 23:02:26.203: INFO: Created: latency-svc-wk2wz Oct 14 23:02:26.214: INFO: Got endpoints: latency-svc-wk2wz [2.001137611s] Oct 14 23:02:26.239: INFO: Created: latency-svc-t2q22 Oct 14 23:02:26.250: INFO: Got endpoints: latency-svc-t2q22 [1.933072543s] Oct 14 23:02:26.311: INFO: Created: latency-svc-nlxgp Oct 14 23:02:26.316: INFO: Got endpoints: latency-svc-nlxgp [1.968081883s] Oct 14 23:02:26.347: INFO: Created: latency-svc-zp977 Oct 14 23:02:26.365: INFO: Got endpoints: latency-svc-zp977 [1.940421292s] Oct 14 23:02:26.382: INFO: Created: latency-svc-6m9s2 Oct 14 23:02:26.407: INFO: Got endpoints: latency-svc-6m9s2 [1.939387145s] Oct 14 23:02:26.475: INFO: Created: latency-svc-lxhjs Oct 14 23:02:26.497: INFO: Created: latency-svc-kj87m Oct 14 23:02:26.497: INFO: Got endpoints: latency-svc-lxhjs [1.977083907s] Oct 14 23:02:26.521: INFO: Got endpoints: latency-svc-kj87m [1.395056443s] Oct 14 23:02:26.551: 
INFO: Created: latency-svc-8n42v Oct 14 23:02:26.564: INFO: Got endpoints: latency-svc-8n42v [1.216114304s] Oct 14 23:02:26.616: INFO: Created: latency-svc-m7jdk Oct 14 23:02:26.630: INFO: Got endpoints: latency-svc-m7jdk [1.114751442s] Oct 14 23:02:26.653: INFO: Created: latency-svc-d26q9 Oct 14 23:02:26.666: INFO: Got endpoints: latency-svc-d26q9 [987.464448ms] Oct 14 23:02:26.690: INFO: Created: latency-svc-lfjsz Oct 14 23:02:26.737: INFO: Got endpoints: latency-svc-lfjsz [949.595183ms] Oct 14 23:02:26.756: INFO: Created: latency-svc-tfjff Oct 14 23:02:26.785: INFO: Got endpoints: latency-svc-tfjff [898.446828ms] Oct 14 23:02:26.917: INFO: Created: latency-svc-bc46n Oct 14 23:02:26.922: INFO: Got endpoints: latency-svc-bc46n [856.065739ms] Oct 14 23:02:26.953: INFO: Created: latency-svc-pnmb2 Oct 14 23:02:26.977: INFO: Got endpoints: latency-svc-pnmb2 [851.50397ms] Oct 14 23:02:27.072: INFO: Created: latency-svc-d5p5z Oct 14 23:02:27.083: INFO: Got endpoints: latency-svc-d5p5z [900.107161ms] Oct 14 23:02:27.127: INFO: Created: latency-svc-76dhp Oct 14 23:02:27.147: INFO: Got endpoints: latency-svc-76dhp [932.635383ms] Oct 14 23:02:27.168: INFO: Created: latency-svc-pl6w7 Oct 14 23:02:27.216: INFO: Got endpoints: latency-svc-pl6w7 [965.418002ms] Oct 14 23:02:27.234: INFO: Created: latency-svc-n7f6r Oct 14 23:02:27.263: INFO: Got endpoints: latency-svc-n7f6r [946.692171ms] Oct 14 23:02:27.295: INFO: Created: latency-svc-fwnt6 Oct 14 23:02:27.309: INFO: Got endpoints: latency-svc-fwnt6 [944.213263ms] Oct 14 23:02:27.360: INFO: Created: latency-svc-q6wgz Oct 14 23:02:27.384: INFO: Got endpoints: latency-svc-q6wgz [977.185726ms] Oct 14 23:02:27.421: INFO: Created: latency-svc-chlvt Oct 14 23:02:27.448: INFO: Got endpoints: latency-svc-chlvt [950.488859ms] Oct 14 23:02:27.493: INFO: Created: latency-svc-ld6m4 Oct 14 23:02:27.508: INFO: Got endpoints: latency-svc-ld6m4 [987.376506ms] Oct 14 23:02:27.536: INFO: Created: latency-svc-n4prt Oct 14 23:02:27.550: INFO: Got 
endpoints: latency-svc-n4prt [986.435294ms]
Oct 14 23:02:27.647: INFO: Created: latency-svc-spkpm
Oct 14 23:02:27.673: INFO: Got endpoints: latency-svc-spkpm [1.042097784s]
Oct 14 23:02:27.674: INFO: Created: latency-svc-gt926
Oct 14 23:02:27.690: INFO: Got endpoints: latency-svc-gt926 [1.022960038s]
Oct 14 23:02:27.719: INFO: Created: latency-svc-r79xn
Oct 14 23:02:27.732: INFO: Got endpoints: latency-svc-r79xn [994.506705ms]
Oct 14 23:02:27.778: INFO: Created: latency-svc-g444r
Oct 14 23:02:27.787: INFO: Got endpoints: latency-svc-g444r [1.000979265s]
Oct 14 23:02:27.824: INFO: Created: latency-svc-8qkdh
Oct 14 23:02:27.864: INFO: Got endpoints: latency-svc-8qkdh [942.669808ms]
Oct 14 23:02:27.929: INFO: Created: latency-svc-298nd
Oct 14 23:02:27.955: INFO: Got endpoints: latency-svc-298nd [978.105547ms]
Oct 14 23:02:27.993: INFO: Created: latency-svc-xnjr5
Oct 14 23:02:28.060: INFO: Got endpoints: latency-svc-xnjr5 [976.080301ms]
Oct 14 23:02:28.087: INFO: Created: latency-svc-4f4p4
Oct 14 23:02:28.117: INFO: Got endpoints: latency-svc-4f4p4 [970.227943ms]
Oct 14 23:02:28.141: INFO: Created: latency-svc-47zdz
Oct 14 23:02:28.154: INFO: Got endpoints: latency-svc-47zdz [938.167065ms]
Oct 14 23:02:28.214: INFO: Created: latency-svc-t62dj
Oct 14 23:02:28.226: INFO: Got endpoints: latency-svc-t62dj [963.576592ms]
Oct 14 23:02:28.255: INFO: Created: latency-svc-lj5lc
Oct 14 23:02:28.268: INFO: Got endpoints: latency-svc-lj5lc [958.66708ms]
Oct 14 23:02:28.317: INFO: Created: latency-svc-7vzhk
Oct 14 23:02:28.323: INFO: Got endpoints: latency-svc-7vzhk [938.458034ms]
Oct 14 23:02:28.370: INFO: Created: latency-svc-bccs5
Oct 14 23:02:28.383: INFO: Got endpoints: latency-svc-bccs5 [934.874068ms]
Oct 14 23:02:28.404: INFO: Created: latency-svc-gdm8z
Oct 14 23:02:28.449: INFO: Got endpoints: latency-svc-gdm8z [940.716569ms]
Oct 14 23:02:28.471: INFO: Created: latency-svc-9p7g6
Oct 14 23:02:28.479: INFO: Got endpoints: latency-svc-9p7g6 [928.897235ms]
Oct 14 23:02:28.501: INFO: Created: latency-svc-lw7zk
Oct 14 23:02:28.516: INFO: Got endpoints: latency-svc-lw7zk [843.036179ms]
Oct 14 23:02:28.543: INFO: Created: latency-svc-fzg25
Oct 14 23:02:28.574: INFO: Got endpoints: latency-svc-fzg25 [884.579249ms]
Oct 14 23:02:28.592: INFO: Created: latency-svc-n4xzt
Oct 14 23:02:28.620: INFO: Got endpoints: latency-svc-n4xzt [888.318544ms]
Oct 14 23:02:28.663: INFO: Created: latency-svc-twlqp
Oct 14 23:02:28.713: INFO: Got endpoints: latency-svc-twlqp [926.265553ms]
Oct 14 23:02:28.724: INFO: Created: latency-svc-mnpbl
Oct 14 23:02:28.747: INFO: Got endpoints: latency-svc-mnpbl [882.51911ms]
Oct 14 23:02:28.780: INFO: Created: latency-svc-vzrkx
Oct 14 23:02:28.807: INFO: Got endpoints: latency-svc-vzrkx [851.865358ms]
Oct 14 23:02:28.856: INFO: Created: latency-svc-zj8kj
Oct 14 23:02:28.865: INFO: Got endpoints: latency-svc-zj8kj [805.492361ms]
Oct 14 23:02:28.903: INFO: Created: latency-svc-ntzrd
Oct 14 23:02:28.914: INFO: Got endpoints: latency-svc-ntzrd [796.736583ms]
Oct 14 23:02:28.946: INFO: Created: latency-svc-sq6fn
Oct 14 23:02:28.982: INFO: Got endpoints: latency-svc-sq6fn [827.989497ms]
Oct 14 23:02:28.993: INFO: Created: latency-svc-swljf
Oct 14 23:02:29.023: INFO: Got endpoints: latency-svc-swljf [797.082011ms]
Oct 14 23:02:29.053: INFO: Created: latency-svc-7q6f6
Oct 14 23:02:29.070: INFO: Got endpoints: latency-svc-7q6f6 [801.995638ms]
Oct 14 23:02:29.114: INFO: Created: latency-svc-9vk72
Oct 14 23:02:29.149: INFO: Got endpoints: latency-svc-9vk72 [826.017397ms]
Oct 14 23:02:29.149: INFO: Created: latency-svc-rzjc8
Oct 14 23:02:29.179: INFO: Got endpoints: latency-svc-rzjc8 [796.258992ms]
Oct 14 23:02:29.282: INFO: Created: latency-svc-nrh56
Oct 14 23:02:29.323: INFO: Got endpoints: latency-svc-nrh56 [873.62437ms]
Oct 14 23:02:29.324: INFO: Created: latency-svc-nfmnc
Oct 14 23:02:29.359: INFO: Got endpoints: latency-svc-nfmnc [879.109285ms]
Oct 14 23:02:29.426: INFO: Created: latency-svc-4w455
Oct 14 23:02:29.429: INFO: Got endpoints: latency-svc-4w455 [913.107829ms]
Oct 14 23:02:29.429: INFO: Latencies: [90.339123ms 169.125948ms 222.76799ms 294.755463ms 373.516395ms 396.609731ms 487.860274ms 541.202035ms 635.341465ms 695.951668ms 698.167783ms 714.140199ms 719.84188ms 762.32372ms 764.355693ms 766.817552ms 774.016127ms 774.043628ms 791.368143ms 796.258992ms 796.736583ms 797.082011ms 801.995638ms 805.492361ms 813.933066ms 826.017397ms 827.989497ms 830.866385ms 836.967426ms 838.920702ms 843.036179ms 845.066442ms 848.209589ms 848.978919ms 851.50397ms 851.865358ms 854.740755ms 854.853386ms 856.065739ms 858.347976ms 859.779836ms 860.82168ms 863.849023ms 865.114084ms 866.098805ms 869.71458ms 871.078863ms 872.810228ms 873.62437ms 875.60249ms 875.911474ms 877.977116ms 879.109285ms 882.51911ms 884.579249ms 888.318544ms 895.200441ms 895.660146ms 895.940729ms 896.657985ms 896.999519ms 898.446828ms 900.107161ms 913.107829ms 918.254889ms 921.088239ms 922.471755ms 926.265553ms 928.897235ms 932.013783ms 932.635383ms 934.874068ms 938.167065ms 938.458034ms 940.716569ms 942.669808ms 944.213263ms 944.261978ms 946.692171ms 949.595183ms 950.488859ms 951.765253ms 958.66708ms 960.359801ms 963.576592ms 965.027813ms 965.418002ms 970.227943ms 971.138631ms 973.451027ms 974.180951ms 976.080301ms 977.185726ms 978.105547ms 981.124324ms 986.435294ms 987.376506ms 987.464448ms 988.3061ms 993.10242ms 994.506705ms 995.610665ms 999.247723ms 1.000979265s 1.001656039s 1.004718547s 1.005006515s 1.005406457s 1.006064226s 1.008665363s 1.017715197s 1.0178705s 1.022366594s 1.022960038s 1.025328547s 1.028974579s 1.042097784s 1.047296667s 1.047570413s 1.053203313s 1.060200315s 1.060887408s 1.064786051s 1.075025758s 1.077089127s 1.077700163s 1.08263062s 1.084416246s 1.087978136s 1.092578398s 1.095289775s 1.102124635s 1.10524825s 1.105956811s 1.108802732s 1.112939854s 1.114751442s 1.129493238s 1.131724956s 1.133324183s 1.13731716s 1.139821498s 1.144731892s 1.14546093s 1.151745345s 1.153843133s 1.158226843s 1.169619274s 1.187874864s 1.19254299s 1.192977596s 1.194447572s 1.196400949s 1.207448422s 1.214848753s 1.216114304s 1.218296179s 1.219873654s 1.224310175s 1.232803917s 1.232904949s 1.235092927s 1.235760836s 1.24067777s 1.25851214s 1.25938962s 1.259516613s 1.262220378s 1.264500419s 1.271260542s 1.275900903s 1.28271646s 1.31492571s 1.321112974s 1.323578206s 1.342624002s 1.345381216s 1.35024699s 1.351123473s 1.363593746s 1.395056443s 1.405300096s 1.416785548s 1.419430289s 1.454480215s 1.551668347s 1.712494992s 1.831659254s 1.929222382s 1.933072543s 1.939387145s 1.940421292s 1.959701922s 1.968081883s 1.975403178s 1.977083907s 1.998641154s 2.001137611s 2.018774652s 2.092220477s]
Oct 14 23:02:29.429: INFO: 50 %ile: 994.506705ms
Oct 14 23:02:29.429: INFO: 90 %ile: 1.395056443s
Oct 14 23:02:29.429: INFO: 99 %ile: 2.018774652s
Oct 14 23:02:29.429: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:02:29.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9274" for this suite.
• [SLOW TEST:19.326 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should not be very high [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":22,"skipped":431,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:02:29.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-upd-b516cc2a-c3e7-47b4-854d-5c90ef12ac75
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-b516cc2a-c3e7-47b4-854d-5c90ef12ac75
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:02:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4386" for this suite.
• [SLOW TEST:6.782 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":23,"skipped":443,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:02:36.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:02:36.442: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75" in namespace "projected-3295" to be "Succeeded or Failed"
Oct 14 23:02:36.528: INFO: Pod "downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75": Phase="Pending", Reason="", readiness=false. Elapsed: 85.018808ms
Oct 14 23:02:38.713: INFO: Pod "downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.270343488s
Oct 14 23:02:40.792: INFO: Pod "downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.349314654s
STEP: Saw pod success
Oct 14 23:02:40.792: INFO: Pod "downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75" satisfied condition "Succeeded or Failed"
Oct 14 23:02:40.796: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75 container client-container:
STEP: delete the pod
Oct 14 23:02:40.996: INFO: Waiting for pod downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75 to disappear
Oct 14 23:02:41.031: INFO: Pod downwardapi-volume-8c3ac900-a64d-486d-9cbd-04d97aecda75 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:02:41.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3295" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":24,"skipped":453,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:02:41.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on tmpfs
Oct 14 23:02:41.175: INFO: Waiting up to 5m0s for pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3" in namespace "emptydir-859" to be "Succeeded or Failed"
Oct 14 23:02:41.211: INFO: Pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.757817ms
Oct 14 23:02:43.269: INFO: Pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094229082s
Oct 14 23:02:45.282: INFO: Pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107615198s
Oct 14 23:02:47.328: INFO: Pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.153591503s
STEP: Saw pod success
Oct 14 23:02:47.328: INFO: Pod "pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3" satisfied condition "Succeeded or Failed"
Oct 14 23:02:47.365: INFO: Trying to get logs from node leguer-worker pod pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3 container test-container:
STEP: delete the pod
Oct 14 23:02:47.510: INFO: Waiting for pod pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3 to disappear
Oct 14 23:02:47.519: INFO: Pod pod-317cb1bf-14f2-4736-a85c-1a5a45c88cd3 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:02:47.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-859" for this suite.
• [SLOW TEST:6.493 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":25,"skipped":466,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:02:47.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:02:47.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752" in namespace "downward-api-7483" to be "Succeeded or Failed"
Oct 14 23:02:47.670: INFO: Pod "downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752": Phase="Pending", Reason="", readiness=false. Elapsed: 11.000406ms
Oct 14 23:02:49.705: INFO: Pod "downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045786632s
Oct 14 23:02:51.724: INFO: Pod "downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065451016s
STEP: Saw pod success
Oct 14 23:02:51.724: INFO: Pod "downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752" satisfied condition "Succeeded or Failed"
Oct 14 23:02:51.754: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752 container client-container:
STEP: delete the pod
Oct 14 23:02:51.844: INFO: Waiting for pod downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752 to disappear
Oct 14 23:02:51.850: INFO: Pod downwardapi-volume-a808aea5-8998-4c16-b373-276738c67752 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:02:51.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7483" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":26,"skipped":487,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:02:51.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 14 23:02:52.162: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 14 23:03:52.230: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:03:52.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Oct 14 23:03:56.779: INFO: found a healthy node: leguer-worker2
[It] runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:04:16.911: INFO: pods created so far: [1 1 1]
Oct 14 23:04:16.911: INFO: length of pods created so far: 3
Oct 14 23:04:34.920: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:04:41.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-7997" for this suite.
[AfterEach] PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:04:41.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1898" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:110.105 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
PreemptionExecutionPath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":27,"skipped":505,"failed":0}
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:04:42.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-map-85afaa51-e3ec-4ab1-9d22-e32f077390a7
STEP: Creating a pod to test consume configMaps
Oct 14 23:04:42.104: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7" in namespace "projected-5638" to be "Succeeded or Failed"
Oct 14 23:04:42.107: INFO: Pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.124339ms
Oct 14 23:04:44.111: INFO: Pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007532754s
Oct 14 23:04:46.116: INFO: Pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.012298994s
Oct 14 23:04:48.188: INFO: Pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.084177659s
STEP: Saw pod success
Oct 14 23:04:48.188: INFO: Pod "pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7" satisfied condition "Succeeded or Failed"
Oct 14 23:04:48.257: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7 container projected-configmap-volume-test:
STEP: delete the pod
Oct 14 23:04:48.540: INFO: Waiting for pod pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7 to disappear
Oct 14 23:04:48.678: INFO: Pod pod-projected-configmaps-3fd3935c-9933-4152-a8eb-bafacccd23d7 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:04:48.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5638" for this suite.
• [SLOW TEST:6.675 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":28,"skipped":506,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:04:48.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Oct 14 23:04:48.808: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:04:48.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6267" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":29,"skipped":525,"failed":0}
SSS
------------------------------
[sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:04:48.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4641.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-4641.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4641.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 14 23:04:57.080: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.083: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.087: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.090: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.098: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.101: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.104: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.107: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:04:57.113: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local]
Oct 14 23:05:02.118: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.122: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.127: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.135: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.138: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.140: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.142: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:02.148: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local]
Oct 14 23:05:07.118: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.121: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.128: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.139: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.143: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344)
Oct 14 23:05:07.146: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod
dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:07.149: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:07.156: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local] Oct 14 23:05:12.116: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.119: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.122: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.124: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod 
dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.131: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.134: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.137: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.139: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:12.145: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local] Oct 14 23:05:17.118: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.122: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.129: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.138: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.141: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.144: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.147: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:17.154: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local] Oct 14 23:05:22.117: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.121: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.125: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.128: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.258: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.262: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.265: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.268: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local from pod dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344: the server could not find the requested resource (get pods dns-test-fda8f947-4e80-46db-bedf-4b8c72675344) Oct 14 23:05:22.273: INFO: Lookups using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4641.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4641.svc.cluster.local jessie_udp@dns-test-service-2.dns-4641.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4641.svc.cluster.local] Oct 14 23:05:27.152: INFO: DNS probes using dns-4641/dns-test-fda8f947-4e80-46db-bedf-4b8c72675344 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:05:27.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4641" for this suite. • [SLOW TEST:38.766 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":30,"skipped":528,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:05:27.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:05:27.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7485" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":31,"skipped":554,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:05:27.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 14 23:05:28.014: INFO: Waiting up to 5m0s for pod "downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27" in namespace "downward-api-2055" to be "Succeeded or Failed" Oct 14 23:05:28.025: INFO: Pod "downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.137718ms Oct 14 23:05:30.327: INFO: Pod "downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.312774269s Oct 14 23:05:32.331: INFO: Pod "downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.316973356s STEP: Saw pod success Oct 14 23:05:32.331: INFO: Pod "downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27" satisfied condition "Succeeded or Failed" Oct 14 23:05:32.333: INFO: Trying to get logs from node leguer-worker pod downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27 container dapi-container: STEP: delete the pod Oct 14 23:05:32.377: INFO: Waiting for pod downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27 to disappear Oct 14 23:05:32.384: INFO: Pod downward-api-cc744986-82e2-4c71-b4f6-0bca5ff00d27 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:05:32.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2055" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":627,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:05:32.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:05:32.489: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42" in namespace "security-context-test-8101" to be "Succeeded or Failed" Oct 14 23:05:32.506: INFO: Pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42": Phase="Pending", Reason="", readiness=false. Elapsed: 16.310506ms Oct 14 23:05:34.510: INFO: Pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020821431s Oct 14 23:05:36.515: INFO: Pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02581845s Oct 14 23:05:36.515: INFO: Pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42" satisfied condition "Succeeded or Failed" Oct 14 23:05:36.522: INFO: Got logs for pod "busybox-privileged-false-b8dcf133-1a48-4b2f-85ce-11ab6aa00b42": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:05:36.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8101" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":33,"skipped":642,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:05:36.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 14 23:05:36.626: INFO: Waiting up to 5m0s for pod "pod-db0a12c7-af16-4140-be0d-ab1628443b5b" in namespace "emptydir-2683" to be "Succeeded or Failed" Oct 14 23:05:36.630: INFO: Pod "pod-db0a12c7-af16-4140-be0d-ab1628443b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.603751ms Oct 14 23:05:38.634: INFO: Pod "pod-db0a12c7-af16-4140-be0d-ab1628443b5b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007752803s Oct 14 23:05:40.638: INFO: Pod "pod-db0a12c7-af16-4140-be0d-ab1628443b5b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012388663s STEP: Saw pod success Oct 14 23:05:40.639: INFO: Pod "pod-db0a12c7-af16-4140-be0d-ab1628443b5b" satisfied condition "Succeeded or Failed" Oct 14 23:05:40.641: INFO: Trying to get logs from node leguer-worker pod pod-db0a12c7-af16-4140-be0d-ab1628443b5b container test-container: STEP: delete the pod Oct 14 23:05:40.672: INFO: Waiting for pod pod-db0a12c7-af16-4140-be0d-ab1628443b5b to disappear Oct 14 23:05:40.678: INFO: Pod pod-db0a12c7-af16-4140-be0d-ab1628443b5b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:05:40.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2683" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":34,"skipped":648,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:05:40.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9860 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 14 23:05:40.791: INFO: Found 0 stateful pods, waiting for 3 Oct 14 23:05:50.795: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:05:50.795: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 
23:05:50.795: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 14 23:06:00.796: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:06:00.796: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:06:00.796: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:06:00.805: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9860 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:06:03.778: INFO: stderr: "I1014 23:06:03.644038 43 log.go:181] (0xc00003a420) (0xc00055c000) Create stream\nI1014 23:06:03.644123 43 log.go:181] (0xc00003a420) (0xc00055c000) Stream added, broadcasting: 1\nI1014 23:06:03.646426 43 log.go:181] (0xc00003a420) Reply frame received for 1\nI1014 23:06:03.646476 43 log.go:181] (0xc00003a420) (0xc000cf0280) Create stream\nI1014 23:06:03.646493 43 log.go:181] (0xc00003a420) (0xc000cf0280) Stream added, broadcasting: 3\nI1014 23:06:03.647804 43 log.go:181] (0xc00003a420) Reply frame received for 3\nI1014 23:06:03.647837 43 log.go:181] (0xc00003a420) (0xc0009bc280) Create stream\nI1014 23:06:03.647849 43 log.go:181] (0xc00003a420) (0xc0009bc280) Stream added, broadcasting: 5\nI1014 23:06:03.648977 43 log.go:181] (0xc00003a420) Reply frame received for 5\nI1014 23:06:03.734546 43 log.go:181] (0xc00003a420) Data frame received for 5\nI1014 23:06:03.734568 43 log.go:181] (0xc0009bc280) (5) Data frame handling\nI1014 23:06:03.734581 43 log.go:181] (0xc0009bc280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:06:03.767238 43 log.go:181] (0xc00003a420) Data frame received for 3\nI1014 23:06:03.767407 43 log.go:181] (0xc000cf0280) (3) Data frame handling\nI1014 23:06:03.767526 43 log.go:181] (0xc000cf0280) (3) Data frame 
sent\nI1014 23:06:03.767934 43 log.go:181] (0xc00003a420) Data frame received for 3\nI1014 23:06:03.767975 43 log.go:181] (0xc000cf0280) (3) Data frame handling\nI1014 23:06:03.768012 43 log.go:181] (0xc00003a420) Data frame received for 5\nI1014 23:06:03.768037 43 log.go:181] (0xc0009bc280) (5) Data frame handling\nI1014 23:06:03.770052 43 log.go:181] (0xc00003a420) Data frame received for 1\nI1014 23:06:03.770076 43 log.go:181] (0xc00055c000) (1) Data frame handling\nI1014 23:06:03.770117 43 log.go:181] (0xc00055c000) (1) Data frame sent\nI1014 23:06:03.770144 43 log.go:181] (0xc00003a420) (0xc00055c000) Stream removed, broadcasting: 1\nI1014 23:06:03.770515 43 log.go:181] (0xc00003a420) Go away received\nI1014 23:06:03.770746 43 log.go:181] (0xc00003a420) (0xc00055c000) Stream removed, broadcasting: 1\nI1014 23:06:03.770778 43 log.go:181] (0xc00003a420) (0xc000cf0280) Stream removed, broadcasting: 3\nI1014 23:06:03.770797 43 log.go:181] (0xc00003a420) (0xc0009bc280) Stream removed, broadcasting: 5\n" Oct 14 23:06:03.778: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:06:03.778: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 14 23:06:13.811: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 14 23:06:23.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9860 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:06:24.127: INFO: stderr: "I1014 23:06:24.016943 61 log.go:181] (0xc000195ef0) (0xc00024abe0) Create stream\nI1014 23:06:24.017015 61 log.go:181] (0xc000195ef0) (0xc00024abe0) Stream added, broadcasting: 
1\nI1014 23:06:24.020030 61 log.go:181] (0xc000195ef0) Reply frame received for 1\nI1014 23:06:24.020092 61 log.go:181] (0xc000195ef0) (0xc000c300a0) Create stream\nI1014 23:06:24.020137 61 log.go:181] (0xc000195ef0) (0xc000c300a0) Stream added, broadcasting: 3\nI1014 23:06:24.021210 61 log.go:181] (0xc000195ef0) Reply frame received for 3\nI1014 23:06:24.021250 61 log.go:181] (0xc000195ef0) (0xc000c30140) Create stream\nI1014 23:06:24.021265 61 log.go:181] (0xc000195ef0) (0xc000c30140) Stream added, broadcasting: 5\nI1014 23:06:24.022163 61 log.go:181] (0xc000195ef0) Reply frame received for 5\nI1014 23:06:24.118070 61 log.go:181] (0xc000195ef0) Data frame received for 5\nI1014 23:06:24.118100 61 log.go:181] (0xc000c30140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:06:24.118142 61 log.go:181] (0xc000195ef0) Data frame received for 3\nI1014 23:06:24.118196 61 log.go:181] (0xc000c300a0) (3) Data frame handling\nI1014 23:06:24.118221 61 log.go:181] (0xc000c300a0) (3) Data frame sent\nI1014 23:06:24.118250 61 log.go:181] (0xc000c30140) (5) Data frame sent\nI1014 23:06:24.118285 61 log.go:181] (0xc000195ef0) Data frame received for 5\nI1014 23:06:24.118304 61 log.go:181] (0xc000c30140) (5) Data frame handling\nI1014 23:06:24.118349 61 log.go:181] (0xc000195ef0) Data frame received for 3\nI1014 23:06:24.118371 61 log.go:181] (0xc000c300a0) (3) Data frame handling\nI1014 23:06:24.120241 61 log.go:181] (0xc000195ef0) Data frame received for 1\nI1014 23:06:24.120262 61 log.go:181] (0xc00024abe0) (1) Data frame handling\nI1014 23:06:24.120274 61 log.go:181] (0xc00024abe0) (1) Data frame sent\nI1014 23:06:24.120303 61 log.go:181] (0xc000195ef0) (0xc00024abe0) Stream removed, broadcasting: 1\nI1014 23:06:24.120325 61 log.go:181] (0xc000195ef0) Go away received\nI1014 23:06:24.120951 61 log.go:181] (0xc000195ef0) (0xc00024abe0) Stream removed, broadcasting: 1\nI1014 23:06:24.120985 61 log.go:181] (0xc000195ef0) (0xc000c300a0) Stream 
removed, broadcasting: 3\nI1014 23:06:24.121009 61 log.go:181] (0xc000195ef0) (0xc000c30140) Stream removed, broadcasting: 5\n" Oct 14 23:06:24.127: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:06:24.127: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:06:44.235: INFO: Waiting for StatefulSet statefulset-9860/ss2 to complete update Oct 14 23:06:44.235: INFO: Waiting for Pod statefulset-9860/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Oct 14 23:06:54.247: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9860 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:06:54.517: INFO: stderr: "I1014 23:06:54.385297 80 log.go:181] (0xc0007df3f0) (0xc0006aa6e0) Create stream\nI1014 23:06:54.385361 80 log.go:181] (0xc0007df3f0) (0xc0006aa6e0) Stream added, broadcasting: 1\nI1014 23:06:54.390598 80 log.go:181] (0xc0007df3f0) Reply frame received for 1\nI1014 23:06:54.390650 80 log.go:181] (0xc0007df3f0) (0xc00054e000) Create stream\nI1014 23:06:54.390664 80 log.go:181] (0xc0007df3f0) (0xc00054e000) Stream added, broadcasting: 3\nI1014 23:06:54.391577 80 log.go:181] (0xc0007df3f0) Reply frame received for 3\nI1014 23:06:54.391625 80 log.go:181] (0xc0007df3f0) (0xc0006aa000) Create stream\nI1014 23:06:54.391641 80 log.go:181] (0xc0007df3f0) (0xc0006aa000) Stream added, broadcasting: 5\nI1014 23:06:54.392497 80 log.go:181] (0xc0007df3f0) Reply frame received for 5\nI1014 23:06:54.455580 80 log.go:181] (0xc0007df3f0) Data frame received for 5\nI1014 23:06:54.455615 80 log.go:181] (0xc0006aa000) (5) Data frame handling\nI1014 23:06:54.455637 80 log.go:181] (0xc0006aa000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 
23:06:54.507687 80 log.go:181] (0xc0007df3f0) Data frame received for 5\nI1014 23:06:54.507748 80 log.go:181] (0xc0006aa000) (5) Data frame handling\nI1014 23:06:54.507802 80 log.go:181] (0xc0007df3f0) Data frame received for 3\nI1014 23:06:54.507826 80 log.go:181] (0xc00054e000) (3) Data frame handling\nI1014 23:06:54.507861 80 log.go:181] (0xc00054e000) (3) Data frame sent\nI1014 23:06:54.507946 80 log.go:181] (0xc0007df3f0) Data frame received for 3\nI1014 23:06:54.507971 80 log.go:181] (0xc00054e000) (3) Data frame handling\nI1014 23:06:54.510275 80 log.go:181] (0xc0007df3f0) Data frame received for 1\nI1014 23:06:54.510309 80 log.go:181] (0xc0006aa6e0) (1) Data frame handling\nI1014 23:06:54.510335 80 log.go:181] (0xc0006aa6e0) (1) Data frame sent\nI1014 23:06:54.510357 80 log.go:181] (0xc0007df3f0) (0xc0006aa6e0) Stream removed, broadcasting: 1\nI1014 23:06:54.510376 80 log.go:181] (0xc0007df3f0) Go away received\nI1014 23:06:54.510996 80 log.go:181] (0xc0007df3f0) (0xc0006aa6e0) Stream removed, broadcasting: 1\nI1014 23:06:54.511032 80 log.go:181] (0xc0007df3f0) (0xc00054e000) Stream removed, broadcasting: 3\nI1014 23:06:54.511052 80 log.go:181] (0xc0007df3f0) (0xc0006aa000) Stream removed, broadcasting: 5\n" Oct 14 23:06:54.517: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:06:54.517: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:07:04.554: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 14 23:07:14.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9860 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:07:14.805: INFO: stderr: "I1014 23:07:14.732303 98 log.go:181] (0xc000f44000) (0xc0000292c0) Create stream\nI1014 23:07:14.732391 98 log.go:181] 
(0xc000f44000) (0xc0000292c0) Stream added, broadcasting: 1\nI1014 23:07:14.734487 98 log.go:181] (0xc000f44000) Reply frame received for 1\nI1014 23:07:14.734571 98 log.go:181] (0xc000f44000) (0xc000d5c0a0) Create stream\nI1014 23:07:14.734601 98 log.go:181] (0xc000f44000) (0xc000d5c0a0) Stream added, broadcasting: 3\nI1014 23:07:14.735971 98 log.go:181] (0xc000f44000) Reply frame received for 3\nI1014 23:07:14.736018 98 log.go:181] (0xc000f44000) (0xc00083cf00) Create stream\nI1014 23:07:14.736032 98 log.go:181] (0xc000f44000) (0xc00083cf00) Stream added, broadcasting: 5\nI1014 23:07:14.736921 98 log.go:181] (0xc000f44000) Reply frame received for 5\nI1014 23:07:14.797625 98 log.go:181] (0xc000f44000) Data frame received for 5\nI1014 23:07:14.797670 98 log.go:181] (0xc00083cf00) (5) Data frame handling\nI1014 23:07:14.797685 98 log.go:181] (0xc00083cf00) (5) Data frame sent\nI1014 23:07:14.797697 98 log.go:181] (0xc000f44000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:07:14.797707 98 log.go:181] (0xc00083cf00) (5) Data frame handling\nI1014 23:07:14.797742 98 log.go:181] (0xc000f44000) Data frame received for 3\nI1014 23:07:14.797762 98 log.go:181] (0xc000d5c0a0) (3) Data frame handling\nI1014 23:07:14.797771 98 log.go:181] (0xc000d5c0a0) (3) Data frame sent\nI1014 23:07:14.797778 98 log.go:181] (0xc000f44000) Data frame received for 3\nI1014 23:07:14.797784 98 log.go:181] (0xc000d5c0a0) (3) Data frame handling\nI1014 23:07:14.799284 98 log.go:181] (0xc000f44000) Data frame received for 1\nI1014 23:07:14.799312 98 log.go:181] (0xc0000292c0) (1) Data frame handling\nI1014 23:07:14.799341 98 log.go:181] (0xc0000292c0) (1) Data frame sent\nI1014 23:07:14.799364 98 log.go:181] (0xc000f44000) (0xc0000292c0) Stream removed, broadcasting: 1\nI1014 23:07:14.799491 98 log.go:181] (0xc000f44000) Go away received\nI1014 23:07:14.799784 98 log.go:181] (0xc000f44000) (0xc0000292c0) Stream removed, broadcasting: 1\nI1014 
23:07:14.799805 98 log.go:181] (0xc000f44000) (0xc000d5c0a0) Stream removed, broadcasting: 3\nI1014 23:07:14.799814 98 log.go:181] (0xc000f44000) (0xc00083cf00) Stream removed, broadcasting: 5\n" Oct 14 23:07:14.805: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:07:14.805: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:07:24.826: INFO: Waiting for StatefulSet statefulset-9860/ss2 to complete update Oct 14 23:07:24.826: INFO: Waiting for Pod statefulset-9860/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:24.826: INFO: Waiting for Pod statefulset-9860/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:24.826: INFO: Waiting for Pod statefulset-9860/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:34.887: INFO: Waiting for StatefulSet statefulset-9860/ss2 to complete update Oct 14 23:07:34.887: INFO: Waiting for Pod statefulset-9860/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:34.887: INFO: Waiting for Pod statefulset-9860/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:45.051: INFO: Waiting for StatefulSet statefulset-9860/ss2 to complete update Oct 14 23:07:45.051: INFO: Waiting for Pod statefulset-9860/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 Oct 14 23:07:54.834: INFO: Waiting for StatefulSet statefulset-9860/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 23:08:04.835: INFO: Deleting all statefulset in ns statefulset-9860 Oct 14 23:08:04.838: INFO: Scaling statefulset ss2 to 0 Oct 14 23:08:34.876: INFO: Waiting for statefulset 
status.replicas updated to 0 Oct 14 23:08:34.882: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:08:34.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9860" for this suite. • [SLOW TEST:174.217 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":35,"skipped":663,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:08:34.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:08:35.006: INFO: Create a RollingUpdate DaemonSet Oct 14 23:08:35.011: INFO: Check that daemon pods launch on every node of the cluster Oct 14 23:08:35.030: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:35.056: INFO: Number of nodes with available pods: 0 Oct 14 23:08:35.056: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:08:36.062: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:36.066: INFO: Number of nodes with available pods: 0 Oct 14 23:08:36.066: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:08:37.062: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:37.067: INFO: Number of nodes with available pods: 0 Oct 14 23:08:37.067: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:08:38.062: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:38.065: INFO: Number of nodes with available pods: 0 Oct 14 23:08:38.065: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:08:39.062: INFO: DaemonSet pods can't 
tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:39.065: INFO: Number of nodes with available pods: 1 Oct 14 23:08:39.065: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:08:40.070: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:40.074: INFO: Number of nodes with available pods: 2 Oct 14 23:08:40.074: INFO: Number of running nodes: 2, number of available pods: 2 Oct 14 23:08:40.074: INFO: Update the DaemonSet to trigger a rollout Oct 14 23:08:40.086: INFO: Updating DaemonSet daemon-set Oct 14 23:08:44.181: INFO: Roll back the DaemonSet before rollout is complete Oct 14 23:08:44.189: INFO: Updating DaemonSet daemon-set Oct 14 23:08:44.189: INFO: Make sure DaemonSet rollback is complete Oct 14 23:08:44.245: INFO: Wrong image for pod: daemon-set-zq9ps. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Oct 14 23:08:44.245: INFO: Pod daemon-set-zq9ps is not available Oct 14 23:08:44.249: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:45.262: INFO: Wrong image for pod: daemon-set-zq9ps. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
Oct 14 23:08:45.262: INFO: Pod daemon-set-zq9ps is not available Oct 14 23:08:45.266: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:08:46.255: INFO: Pod daemon-set-6mssd is not available Oct 14 23:08:46.259: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3561, will wait for the garbage collector to delete the pods Oct 14 23:08:46.325: INFO: Deleting DaemonSet.extensions daemon-set took: 6.800365ms Oct 14 23:08:46.425: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.269924ms Oct 14 23:08:50.828: INFO: Number of nodes with available pods: 0 Oct 14 23:08:50.828: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 23:08:50.831: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3561/daemonsets","resourceVersion":"2946354"},"items":null} Oct 14 23:08:50.833: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3561/pods","resourceVersion":"2946354"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:08:50.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3561" for this suite. 
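[Editor's note] The DaemonSet rollback flow logged above (create a RollingUpdate DaemonSet, update to an unpullable image, roll back before the rollout completes) is driven by an object equivalent to the following manifest. This is a minimal illustrative sketch, not the test's actual spec; the selector labels are assumed, while the image tags (`httpd:2.4.38-alpine`, `foo:non-existent`) are taken from the log itself:

```yaml
# Illustrative RollingUpdate DaemonSet; the e2e test drives an equivalent object.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set        # assumed label; the test generates its own
  updateStrategy:
    type: RollingUpdate      # required for the rollback to apply pod-by-pod
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
```

Patching `spec.template.spec.containers[0].image` to `foo:non-existent` triggers the failing rollout seen in the log ("Wrong image for pod"); restoring the original image before completion exercises the rollback path without restarting pods that never left the old revision.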
• [SLOW TEST:15.980 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":36,"skipped":671,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:08:50.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:08:51.721: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:08:53.730: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313731, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313731, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313731, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313731, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:08:56.792: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:08:56.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7757-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:08:58.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "webhook-1922" for this suite. STEP: Destroying namespace "webhook-1922-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.253 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":37,"skipped":687,"failed":0} SSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:08:58.137: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 14 23:08:58.210: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 14 23:08:58.215: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 14 23:08:58.215: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 14 23:08:58.222: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 14 23:08:58.222: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 14 23:08:58.257: INFO: Verifying 
requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 14 23:08:58.257: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 14 23:09:06.135: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:09:06.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3639" for this suite. • [SLOW TEST:8.053 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":38,"skipped":695,"failed":0} [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:09:06.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 14 23:09:06.263: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:09:15.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9947" for this suite. 
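[Editor's note] The init-container case above ("PodSpec: initContainers in spec.initContainers" on a RestartNever pod) corresponds to a pod shaped roughly like the sketch below. The image and commands are illustrative assumptions, not what the test actually runs:

```yaml
# Illustrative sketch: init containers run to completion, in order,
# before the main container starts; restartPolicy Never means a failed
# init container fails the pod permanently.
apiVersion: v1
kind: Pod
metadata:
  name: init-demo            # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox           # assumed image
    command: ["true"]
  containers:
  - name: main
    image: busybox
    command: ["true"]
```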
• [SLOW TEST:9.740 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":39,"skipped":695,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:09:15.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-4434/configmap-test-fd97ab1f-0fef-4ae9-b289-5eccc6c72442 STEP: Creating a pod to test consume configMaps Oct 14 23:09:16.278: INFO: Waiting up to 5m0s for pod "pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e" in namespace "configmap-4434" to be "Succeeded or Failed" Oct 14 23:09:16.344: INFO: Pod "pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e": Phase="Pending", Reason="", 
readiness=false. Elapsed: 66.453706ms Oct 14 23:09:18.350: INFO: Pod "pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071661565s Oct 14 23:09:20.352: INFO: Pod "pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074284892s STEP: Saw pod success Oct 14 23:09:20.352: INFO: Pod "pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e" satisfied condition "Succeeded or Failed" Oct 14 23:09:20.355: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e container env-test: STEP: delete the pod Oct 14 23:09:20.396: INFO: Waiting for pod pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e to disappear Oct 14 23:09:20.405: INFO: Pod pod-configmaps-5992b3d9-39e5-40da-8dd2-b8afe168759e no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:09:20.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4434" for this suite. 
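[Editor's note] The ConfigMap-as-environment-variable case above pairs a ConfigMap with a pod whose container (named `env-test` in the log) pulls a key in via `valueFrom.configMapKeyRef`. A minimal sketch, with assumed key names and image; only the container name comes from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test       # hypothetical name
data:
  data-1: value-1            # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox           # assumed image
    command: ["sh", "-c", "env"]   # prints the env, so the test can check logs
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```

The test then asserts the pod reaches "Succeeded" and that the expected variable appears in the container's logs, matching the "Trying to get logs ... container env-test" line above.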
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":705,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:09:20.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-1068 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-1068 STEP: Deleting pre-stop pod Oct 14 23:09:33.518: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:09:33.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1068" for this suite.
• [SLOW TEST:13.150 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":41,"skipped":712,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:09:33.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Oct 14 23:09:33.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:09:49.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3824" for this suite.
• [SLOW TEST:15.957 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":42,"skipped":712,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:09:49.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Oct 14 23:09:49.595: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9232'
Oct 14 23:09:49.943: INFO: stderr: ""
Oct 14 23:09:49.943: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 14 23:09:50.948: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:50.948: INFO: Found 0 / 1
Oct 14 23:09:51.948: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:51.948: INFO: Found 0 / 1
Oct 14 23:09:52.947: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:52.947: INFO: Found 0 / 1
Oct 14 23:09:53.947: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:53.947: INFO: Found 1 / 1
Oct 14 23:09:53.947: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Oct 14 23:09:53.950: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:53.950: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 14 23:09:53.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config patch pod agnhost-primary-jbfsr --namespace=kubectl-9232 -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 14 23:09:54.070: INFO: stderr: ""
Oct 14 23:09:54.070: INFO: stdout: "pod/agnhost-primary-jbfsr patched\n"
STEP: checking annotations
Oct 14 23:09:54.086: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 14 23:09:54.086: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:09:54.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9232" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":43,"skipped":717,"failed":0}
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:09:54.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
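The `kubectl patch` invocation recorded above sends a strategic-merge patch that adds a single annotation; the apiserver merges the `annotations` map into the existing metadata rather than replacing it. A minimal sketch of building that same patch body programmatically (the `annotation_patch` helper is illustrative, not part of the e2e suite):

```python
import json

def annotation_patch(key: str, value: str) -> str:
    # Build the strategic-merge-patch body the test passes via
    # `-p {"metadata":{"annotations":{"x":"y"}}}` in the log above.
    return json.dumps({"metadata": {"annotations": {key: value}}})

# The suite patches annotation x=y onto the agnhost pod:
body = annotation_patch("x", "y")
```

Passing `body` to `kubectl patch pod <name> -p "$body"` would reproduce the step by hand.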
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:09:58.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5484" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":44,"skipped":720,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] version v1
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:09:58.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5xmvv in namespace proxy-1549
I1014 23:09:58.386798 7 runners.go:190] Created replication controller with name: proxy-service-5xmvv, namespace: proxy-1549, replica count: 1
I1014 23:09:59.437185 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1014 23:10:00.437407 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1014 23:10:01.437622 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1014 23:10:02.437799 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1014 23:10:03.438011 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I1014 23:10:04.438211 7 runners.go:190] proxy-service-5xmvv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 14 23:10:04.442: INFO: setup took 6.123953746s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Oct 14 23:10:04.448: INFO: (0) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... 
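Every one of the 320 attempts that follow goes through the apiserver's proxy subresource, and each URL in the log follows one pattern: namespace, resource kind, an optional scheme prefix, the target name, and an optional port (or port-name) suffix. A sketch of that pattern (the `proxy_path` helper is illustrative, not part of the suite):

```python
def proxy_path(namespace: str, kind: str, name: str,
               scheme: str = "", port: str = "") -> str:
    # Apiserver proxy path, as exercised by the [sig-network] Proxy test:
    #   /api/v1/namespaces/<ns>/<pods|services>/[<scheme>:]<name>[:<port>]/proxy/
    target = f"{scheme}:{name}" if scheme else name
    if port:
        target = f"{target}:{port}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"

# Two of the URLs visible in the log:
pod_url = proxy_path("proxy-1549", "pods", "proxy-service-5xmvv-w7c5l",
                     scheme="http", port="1080")
svc_url = proxy_path("proxy-1549", "services", "proxy-service-5xmvv",
                     port="portname2")
```

Requests to these paths are served by the apiserver, which forwards them to the pod or to one of the service's endpoints; the `(200; N ms)` annotations in the log are the observed round-trip status and latency.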
(200; 5.538084ms) Oct 14 23:10:04.453: INFO: (0) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 11.065057ms) Oct 14 23:10:04.454: INFO: (0) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 11.491988ms) Oct 14 23:10:04.454: INFO: (0) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 12.1959ms) Oct 14 23:10:04.454: INFO: (0) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 12.154786ms) Oct 14 23:10:04.454: INFO: (0) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 12.176383ms) Oct 14 23:10:04.455: INFO: (0) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 12.573083ms) Oct 14 23:10:04.455: INFO: (0) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 12.657457ms) Oct 14 23:10:04.455: INFO: (0) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 12.800878ms) Oct 14 23:10:04.455: INFO: (0) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 12.979961ms) Oct 14 23:10:04.464: INFO: (0) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 21.375618ms) Oct 14 23:10:04.465: INFO: (0) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 22.833239ms) Oct 14 23:10:04.465: INFO: (0) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 23.00844ms) Oct 14 23:10:04.465: INFO: (0) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 22.863277ms) Oct 14 23:10:04.465: INFO: (0) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 22.945696ms) Oct 14 23:10:04.468: INFO: (0) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... 
(200; 8.742215ms) Oct 14 23:10:04.477: INFO: (1) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 8.748336ms) Oct 14 23:10:04.477: INFO: (1) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 8.844068ms) Oct 14 23:10:04.477: INFO: (1) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test (200; 9.66696ms) Oct 14 23:10:04.478: INFO: (1) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 9.591876ms) Oct 14 23:10:04.478: INFO: (1) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 9.624513ms) Oct 14 23:10:04.478: INFO: (1) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 9.751282ms) Oct 14 23:10:04.478: INFO: (1) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 9.713028ms) Oct 14 23:10:04.478: INFO: (1) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 9.760189ms) Oct 14 23:10:04.484: INFO: (2) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 7.227157ms) Oct 14 23:10:04.485: INFO: (2) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 7.269606ms) Oct 14 23:10:04.485: INFO: (2) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... 
(200; 7.336066ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 8.484222ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 8.507449ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 8.533083ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 8.517309ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 8.503601ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 8.562596ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 8.674178ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 8.746047ms) Oct 14 23:10:04.486: INFO: (2) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 8.733291ms) Oct 14 23:10:04.487: INFO: (2) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 8.890781ms) Oct 14 23:10:04.487: INFO: (2) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 8.944284ms) Oct 14 23:10:04.503: INFO: (2) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 25.554283ms) Oct 14 23:10:04.507: INFO: (3) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.009802ms) Oct 14 23:10:04.508: INFO: (3) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.201839ms) Oct 14 23:10:04.508: INFO: (3) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz 
(200; 4.606824ms) Oct 14 23:10:04.508: INFO: (3) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 4.612982ms) Oct 14 23:10:04.508: INFO: (3) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.96566ms) Oct 14 23:10:04.508: INFO: (3) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... (200; 5.387464ms) Oct 14 23:10:04.509: INFO: (3) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 5.481953ms) Oct 14 23:10:04.509: INFO: (3) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.781456ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 6.147006ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 6.194042ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 6.168412ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 6.160666ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 6.189918ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... 
(200; 6.181296ms) Oct 14 23:10:04.510: INFO: (3) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 6.474901ms) Oct 14 23:10:04.514: INFO: (4) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.304575ms) Oct 14 23:10:04.515: INFO: (4) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.178088ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.648492ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 5.767607ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 5.865927ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 5.930019ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 5.859577ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 6.002889ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 5.960464ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 6.021095ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 6.056046ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 6.04005ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 6.263957ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... 
(200; 6.446245ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 6.393377ms) Oct 14 23:10:04.516: INFO: (4) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test (200; 3.618713ms) Oct 14 23:10:04.521: INFO: (5) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.048254ms) Oct 14 23:10:04.521: INFO: (5) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.007786ms) Oct 14 23:10:04.521: INFO: (5) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... (200; 4.055154ms) Oct 14 23:10:04.521: INFO: (5) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 4.185806ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 5.095295ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 5.154992ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 5.319568ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.310051ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 5.352021ms) Oct 14 23:10:04.522: INFO: (5) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 5.443919ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 4.630812ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 4.675801ms) Oct 14 23:10:04.527: INFO: (6) 
/api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 4.77008ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 4.653501ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 4.798395ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.797463ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 5.043426ms) Oct 14 23:10:04.527: INFO: (6) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... (200; 5.681115ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 5.88982ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.930457ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 6.019279ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 5.989603ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 5.93131ms) Oct 14 23:10:04.528: INFO: (6) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 6.044386ms) Oct 14 23:10:04.531: INFO: (7) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... 
(200; 10.683363ms) Oct 14 23:10:04.539: INFO: (7) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 10.646387ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 11.27094ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 11.285476ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 11.539811ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 11.580253ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 11.575069ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 11.627029ms) Oct 14 23:10:04.540: INFO: (7) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 11.79463ms) Oct 14 23:10:04.541: INFO: (7) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... 
(200; 12.2498ms) Oct 14 23:10:04.545: INFO: (8) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 4.098206ms) Oct 14 23:10:04.545: INFO: (8) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.245644ms) Oct 14 23:10:04.545: INFO: (8) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 4.308304ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 5.529674ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.712791ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.672432ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.782584ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 5.790329ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 5.805768ms) Oct 14 23:10:04.546: INFO: (8) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 5.781782ms) Oct 14 23:10:04.547: INFO: (8) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... 
(200; 6.187658ms) Oct 14 23:10:04.547: INFO: (8) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 6.191928ms) Oct 14 23:10:04.547: INFO: (8) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 6.237452ms) Oct 14 23:10:04.547: INFO: (8) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 6.316843ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 3.991538ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 3.991231ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 3.997222ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 4.077245ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 3.994988ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 4.096104ms) Oct 14 23:10:04.551: INFO: (9) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 4.728872ms) Oct 14 23:10:04.552: INFO: (9) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... 
(200; 4.790515ms) Oct 14 23:10:04.552: INFO: (9) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.871195ms) Oct 14 23:10:04.552: INFO: (9) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 4.903833ms) Oct 14 23:10:04.552: INFO: (9) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.864358ms) Oct 14 23:10:04.555: INFO: (10) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 2.848373ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 3.856346ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 4.098533ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.217949ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 4.215235ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.203073ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 4.245857ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 4.283707ms) Oct 14 23:10:04.556: INFO: (10) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 4.284823ms) Oct 14 23:10:04.557: INFO: (10) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 4.432062ms) Oct 14 23:10:04.557: INFO: (10) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 2.348644ms) Oct 14 23:10:04.560: INFO: (11) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... 
(200; 4.424165ms)
Oct 14 23:10:04.561: INFO: (11) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.437209ms)
Oct 14 23:10:04.561: INFO: (11) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 4.42761ms)
Oct 14 23:10:04.561: INFO: (11) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.456489ms)
Oct 14 23:10:04.561: INFO: (11) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.489307ms)
Oct 14 23:10:04.561: INFO: (11) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 4.514076ms)
Oct 14 23:10:04.562: INFO: (11) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.918391ms)
Oct 14 23:10:04.565: INFO: (12) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 2.741973ms)
Oct 14 23:10:04.565: INFO: (12) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 4.233761ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 4.819284ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 5.023895ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 5.13276ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 5.234354ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 5.350967ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 5.339985ms)
Oct 14 23:10:04.567: INFO: (12) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 5.419636ms)
Oct 14 23:10:04.568: INFO: (12) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 6.245419ms)
Oct 14 23:10:04.571: INFO: (12) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 9.158912ms)
Oct 14 23:10:04.571: INFO: (12) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 9.224831ms)
Oct 14 23:10:04.572: INFO: (12) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 10.297668ms)
Oct 14 23:10:04.572: INFO: (12) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 10.378226ms)
Oct 14 23:10:04.575: INFO: (13) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 4.637682ms)
Oct 14 23:10:04.577: INFO: (13) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 4.714304ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 8.257066ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 8.282037ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 8.285893ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 8.523557ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 8.306829ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 8.483669ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 8.354431ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 8.545102ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 8.348333ms)
Oct 14 23:10:04.581: INFO: (13) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 8.542844ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.011852ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 4.113445ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 4.039223ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.103492ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 4.075673ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 4.092556ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 4.173101ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 4.181388ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 4.452788ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.425982ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 4.3915ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 4.538368ms)
Oct 14 23:10:04.585: INFO: (14) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 4.510099ms)
Oct 14 23:10:04.586: INFO: (14) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 4.507139ms)
Oct 14 23:10:04.586: INFO: (14) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 4.606518ms)
Oct 14 23:10:04.586: INFO: (14) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test (200; 5.673206ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.729512ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 5.663797ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 5.796531ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 5.701449ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 5.738764ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 5.73312ms)
Oct 14 23:10:04.591: INFO: (15) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... (200; 3.602141ms)
Oct 14 23:10:04.595: INFO: (16) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 4.134063ms)
Oct 14 23:10:04.596: INFO: (16) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 4.11778ms)
Oct 14 23:10:04.596: INFO: (16) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 4.283715ms)
Oct 14 23:10:04.596: INFO: (16) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.682569ms)
Oct 14 23:10:04.596: INFO: (16) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 4.713911ms)
Oct 14 23:10:04.596: INFO: (16) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 4.944581ms)
Oct 14 23:10:04.597: INFO: (16) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 5.069733ms)
Oct 14 23:10:04.597: INFO: (16) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.071107ms)
Oct 14 23:10:04.600: INFO: (17) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 3.095615ms)
Oct 14 23:10:04.600: INFO: (17) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 3.196799ms)
Oct 14 23:10:04.600: INFO: (17) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 3.372135ms)
Oct 14 23:10:04.600: INFO: (17) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: ... (200; 4.027414ms)
Oct 14 23:10:04.601: INFO: (17) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname2/proxy/: bar (200; 4.825035ms)
Oct 14 23:10:04.602: INFO: (17) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname1/proxy/: tls baz (200; 4.886589ms)
Oct 14 23:10:04.602: INFO: (17) /api/v1/namespaces/proxy-1549/services/https:proxy-service-5xmvv:tlsportname2/proxy/: tls qux (200; 4.917221ms)
Oct 14 23:10:04.602: INFO: (17) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 4.982274ms)
Oct 14 23:10:04.602: INFO: (17) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname1/proxy/: foo (200; 5.040426ms)
Oct 14 23:10:04.602: INFO: (17) /api/v1/namespaces/proxy-1549/services/http:proxy-service-5xmvv:portname1/proxy/: foo (200; 5.123689ms)
Oct 14 23:10:04.605: INFO: (18) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 3.47942ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l/proxy/: test (200; 3.707216ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:462/proxy/: tls qux (200; 3.78894ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 3.923878ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 3.907614ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 3.809221ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:460/proxy/: tls baz (200; 3.922027ms)
Oct 14 23:10:04.606: INFO: (18) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:1080/proxy/: test<... (200; 4.075647ms)
Oct 14 23:10:04.607: INFO: (18) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test (200; 7.10787ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/services/proxy-service-5xmvv:portname2/proxy/: bar (200; 7.03569ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:1080/proxy/: ... (200; 7.037021ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:160/proxy/: foo (200; 7.212997ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/pods/http:proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 7.269168ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/pods/proxy-service-5xmvv-w7c5l:162/proxy/: bar (200; 7.3012ms)
Oct 14 23:10:04.615: INFO: (19) /api/v1/namespaces/proxy-1549/pods/https:proxy-service-5xmvv-w7c5l:443/proxy/: test<... (200; 7.436142ms)
STEP: deleting ReplicationController proxy-service-5xmvv in namespace proxy-1549, will wait for the garbage collector to delete the pods
Oct 14 23:10:04.675: INFO: Deleting ReplicationController proxy-service-5xmvv took: 7.826808ms
Oct 14 23:10:04.775: INFO: Terminating ReplicationController proxy-service-5xmvv pods took: 100.23104ms
[AfterEach] version v1
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:09.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1549" for this suite.
• [SLOW TEST:11.708 seconds]
[sig-network] Proxy
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
should proxy through a service and a pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":45,"skipped":725,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:09.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 23:10:10.665: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 23:10:12.772: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313810, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313810, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313810, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313810, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 23:10:15.817: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:10:15.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2298-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:16.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4712" for this suite.
STEP: Destroying namespace "webhook-4712-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.105 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should mutate custom resource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":46,"skipped":741,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:17.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:10:17.057: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:23.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3924" for this suite.
• [SLOW TEST:6.423 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
listing custom resource definition objects works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":47,"skipped":745,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:23.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name projected-configmap-test-volume-1c12f653-f2a7-484a-afcb-048c22623e63
STEP: Creating a pod to test consume configMaps
Oct 14 23:10:23.550: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046" in namespace "projected-3690" to be "Succeeded or Failed"
Oct 14 23:10:23.573: INFO: Pod "pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046": Phase="Pending", Reason="", readiness=false. Elapsed: 23.129236ms
Oct 14 23:10:25.577: INFO: Pod "pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027099634s
Oct 14 23:10:27.582: INFO: Pod "pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031884635s
STEP: Saw pod success
Oct 14 23:10:27.582: INFO: Pod "pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046" satisfied condition "Succeeded or Failed"
Oct 14 23:10:27.585: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046 container projected-configmap-volume-test:
STEP: delete the pod
Oct 14 23:10:27.632: INFO: Waiting for pod pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046 to disappear
Oct 14 23:10:27.694: INFO: Pod pod-projected-configmaps-42ba22d3-e0d8-43d2-90c4-1ed87c530046 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:27.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3690" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":747,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:27.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 23:10:28.365: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 23:10:30.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313828, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313828, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313828, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313828, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 23:10:33.407: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:33.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3330" for this suite.
STEP: Destroying namespace "webhook-3330-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.194 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should include webhook resources in discovery documents [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":49,"skipped":758,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:33.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 14 23:10:34.207: INFO: Waiting up to 5m0s for pod "pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e" in namespace "emptydir-6234" to be "Succeeded or Failed"
Oct 14 23:10:34.444: INFO: Pod "pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 236.273063ms
Oct 14 23:10:36.447: INFO: Pod "pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.239945255s
Oct 14 23:10:38.476: INFO: Pod "pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.268686042s
STEP: Saw pod success
Oct 14 23:10:38.476: INFO: Pod "pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e" satisfied condition "Succeeded or Failed"
Oct 14 23:10:38.482: INFO: Trying to get logs from node leguer-worker pod pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e container test-container:
STEP: delete the pod
Oct 14 23:10:38.595: INFO: Waiting for pod pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e to disappear
Oct 14 23:10:38.614: INFO: Pod pod-02094c2e-1d3f-4bbd-b0cb-8d7f5756c09e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:10:38.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6234" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":788,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:10:38.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Update Demo
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308
[It] should scale a replication controller [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a replication controller
Oct 14 23:10:38.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5262'
Oct 14 23:10:39.255: INFO: stderr: ""
Oct 14 23:10:39.255: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 14 23:10:39.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262'
Oct 14 23:10:39.403: INFO: stderr: ""
Oct 14 23:10:39.403: INFO: stdout: "update-demo-nautilus-bdjsz update-demo-nautilus-mtnhq "
Oct 14 23:10:39.403: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdjsz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:39.526: INFO: stderr: ""
Oct 14 23:10:39.526: INFO: stdout: ""
Oct 14 23:10:39.526: INFO: update-demo-nautilus-bdjsz is created but not running
Oct 14 23:10:44.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262'
Oct 14 23:10:44.634: INFO: stderr: ""
Oct 14 23:10:44.634: INFO: stdout: "update-demo-nautilus-bdjsz update-demo-nautilus-mtnhq "
Oct 14 23:10:44.634: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdjsz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:44.738: INFO: stderr: ""
Oct 14 23:10:44.738: INFO: stdout: "true"
Oct 14 23:10:44.738: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdjsz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:44.840: INFO: stderr: ""
Oct 14 23:10:44.840: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 14 23:10:44.840: INFO: validating pod update-demo-nautilus-bdjsz
Oct 14 23:10:44.845: INFO: got data: { "image": "nautilus.jpg" }
Oct 14 23:10:44.845: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 14 23:10:44.845: INFO: update-demo-nautilus-bdjsz is verified up and running
Oct 14 23:10:44.845: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:44.939: INFO: stderr: ""
Oct 14 23:10:44.939: INFO: stdout: "true"
Oct 14 23:10:44.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:45.032: INFO: stderr: ""
Oct 14 23:10:45.032: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 14 23:10:45.032: INFO: validating pod update-demo-nautilus-mtnhq
Oct 14 23:10:45.037: INFO: got data: { "image": "nautilus.jpg" }
Oct 14 23:10:45.037: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 14 23:10:45.037: INFO: update-demo-nautilus-mtnhq is verified up and running
STEP: scaling down the replication controller
Oct 14 23:10:45.041: INFO: scanned /root for discovery docs:
Oct 14 23:10:45.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5262'
Oct 14 23:10:46.168: INFO: stderr: ""
Oct 14 23:10:46.168: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 14 23:10:46.168: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262'
Oct 14 23:10:46.279: INFO: stderr: ""
Oct 14 23:10:46.279: INFO: stdout: "update-demo-nautilus-bdjsz update-demo-nautilus-mtnhq "
STEP: Replicas for name=update-demo: expected=1 actual=2
Oct 14 23:10:51.279: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262'
Oct 14 23:10:51.390: INFO: stderr: ""
Oct 14 23:10:51.390: INFO: stdout: "update-demo-nautilus-mtnhq "
Oct 14 23:10:51.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:51.492: INFO: stderr: ""
Oct 14 23:10:51.492: INFO: stdout: "true"
Oct 14 23:10:51.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:51.617: INFO: stderr: ""
Oct 14 23:10:51.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Oct 14 23:10:51.617: INFO: validating pod update-demo-nautilus-mtnhq
Oct 14 23:10:51.621: INFO: got data: { "image": "nautilus.jpg" }
Oct 14 23:10:51.621: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Oct 14 23:10:51.621: INFO: update-demo-nautilus-mtnhq is verified up and running
STEP: scaling up the replication controller
Oct 14 23:10:51.624: INFO: scanned /root for discovery docs:
Oct 14 23:10:51.624: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5262'
Oct 14 23:10:52.744: INFO: stderr: ""
Oct 14 23:10:52.744: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Oct 14 23:10:52.744: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262'
Oct 14 23:10:52.852: INFO: stderr: ""
Oct 14 23:10:52.852: INFO: stdout: "update-demo-nautilus-mtnhq update-demo-nautilus-wd7dx "
Oct 14 23:10:52.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262'
Oct 14 23:10:52.953: INFO: stderr: ""
Oct 14 23:10:52.953: INFO: stdout: "true"
Oct 14 23:10:52.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:53.047: INFO: stderr: "" Oct 14 23:10:53.047: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 23:10:53.047: INFO: validating pod update-demo-nautilus-mtnhq Oct 14 23:10:53.050: INFO: got data: { "image": "nautilus.jpg" } Oct 14 23:10:53.050: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 23:10:53.050: INFO: update-demo-nautilus-mtnhq is verified up and running Oct 14 23:10:53.050: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wd7dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:53.194: INFO: stderr: "" Oct 14 23:10:53.194: INFO: stdout: "" Oct 14 23:10:53.194: INFO: update-demo-nautilus-wd7dx is created but not running Oct 14 23:10:58.195: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5262' Oct 14 23:10:58.310: INFO: stderr: "" Oct 14 23:10:58.310: INFO: stdout: "update-demo-nautilus-mtnhq update-demo-nautilus-wd7dx " Oct 14 23:10:58.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:58.409: INFO: stderr: "" Oct 14 23:10:58.409: INFO: stdout: "true" Oct 14 23:10:58.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mtnhq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:58.505: INFO: stderr: "" Oct 14 23:10:58.505: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 23:10:58.505: INFO: validating pod update-demo-nautilus-mtnhq Oct 14 23:10:58.508: INFO: got data: { "image": "nautilus.jpg" } Oct 14 23:10:58.508: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 23:10:58.508: INFO: update-demo-nautilus-mtnhq is verified up and running Oct 14 23:10:58.508: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wd7dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:58.607: INFO: stderr: "" Oct 14 23:10:58.607: INFO: stdout: "true" Oct 14 23:10:58.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wd7dx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5262' Oct 14 23:10:58.705: INFO: stderr: "" Oct 14 23:10:58.705: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 23:10:58.705: INFO: validating pod update-demo-nautilus-wd7dx Oct 14 23:10:58.709: INFO: got data: { "image": "nautilus.jpg" } Oct 14 23:10:58.709: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 23:10:58.709: INFO: update-demo-nautilus-wd7dx is verified up and running STEP: using delete to clean up resources Oct 14 23:10:58.709: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5262' Oct 14 23:10:58.830: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 14 23:10:58.830: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 14 23:10:58.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5262' Oct 14 23:10:58.938: INFO: stderr: "No resources found in kubectl-5262 namespace.\n" Oct 14 23:10:58.938: INFO: stdout: "" Oct 14 23:10:58.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5262 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 23:10:59.051: INFO: stderr: "" Oct 14 23:10:59.051: INFO: stdout: "update-demo-nautilus-mtnhq\nupdate-demo-nautilus-wd7dx\n" Oct 14 23:10:59.551: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get 
rc,svc -l name=update-demo --no-headers --namespace=kubectl-5262' Oct 14 23:10:59.700: INFO: stderr: "No resources found in kubectl-5262 namespace.\n" Oct 14 23:10:59.700: INFO: stdout: "" Oct 14 23:10:59.700: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5262 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 23:10:59.831: INFO: stderr: "" Oct 14 23:10:59.831: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:10:59.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5262" for this suite. • [SLOW TEST:21.214 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":51,"skipped":792,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:10:59.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:11:01.031: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:11:03.041: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313861, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313861, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313861, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313860, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 
23:11:06.085: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 14 23:11:10.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config attach --namespace=webhook-9939 to-be-attached-pod -i -c=container1' Oct 14 23:11:10.283: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:10.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9939" for this suite. STEP: Destroying namespace "webhook-9939-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.559 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":52,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:10.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9375" for this suite. • [SLOW TEST:16.293 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":303,"completed":53,"skipped":841,"failed":0} [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:26.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test env composition Oct 14 23:11:26.779: INFO: Waiting up to 5m0s for pod "var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e" in namespace "var-expansion-3081" to be "Succeeded or Failed" Oct 14 23:11:26.807: INFO: Pod "var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.646761ms Oct 14 23:11:28.852: INFO: Pod "var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072958592s Oct 14 23:11:30.856: INFO: Pod "var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.07677253s STEP: Saw pod success Oct 14 23:11:30.856: INFO: Pod "var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e" satisfied condition "Succeeded or Failed" Oct 14 23:11:30.859: INFO: Trying to get logs from node leguer-worker pod var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e container dapi-container: STEP: delete the pod Oct 14 23:11:30.911: INFO: Waiting for pod var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e to disappear Oct 14 23:11:30.919: INFO: Pod var-expansion-e6b41e8d-f779-4a1a-99a9-51cb393be70e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:30.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3081" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":841,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:30.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:36.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1939" for this suite. • [SLOW TEST:5.464 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":55,"skipped":855,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:36.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:11:36.473: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a" in namespace "downward-api-3656" to be "Succeeded or Failed" Oct 14 23:11:36.483: INFO: Pod "downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535652ms Oct 14 23:11:38.486: INFO: Pod "downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012426298s Oct 14 23:11:40.606: INFO: Pod "downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.132696549s STEP: Saw pod success Oct 14 23:11:40.606: INFO: Pod "downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a" satisfied condition "Succeeded or Failed" Oct 14 23:11:40.609: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a container client-container: STEP: delete the pod Oct 14 23:11:40.667: INFO: Waiting for pod downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a to disappear Oct 14 23:11:40.688: INFO: Pod downwardapi-volume-52c38fbd-ec3c-4adc-9c09-fae740bbc08a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:40.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3656" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":56,"skipped":862,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:40.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:11:41.235: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:11:43.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:11:45.278: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313901, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:11:48.304: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:48.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5387" for this suite. STEP: Destroying namespace "webhook-5387-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.897 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":57,"skipped":871,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:48.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 23:11:53.259: INFO: Successfully updated pod "annotationupdate5d7d854d-b389-4190-bb18-32a4949eaa83" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:11:55.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7416" for this suite. • [SLOW TEST:6.732 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":876,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:11:55.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-4050 STEP: creating replication controller nodeport-test in namespace services-4050 I1014 23:11:55.486694 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-4050, replica count: 2 I1014 23:11:58.537292 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:12:01.537486 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:12:01.537: INFO: Creating new exec pod Oct 14 23:12:06.564: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4050 execpod9jcx8 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Oct 14 23:12:06.846: INFO: stderr: "I1014 23:12:06.733323 679 log.go:181] (0xc00043cc60) (0xc0005560a0) Create stream\nI1014 23:12:06.733430 679 log.go:181] (0xc00043cc60) (0xc0005560a0) Stream added, broadcasting: 1\nI1014 23:12:06.735529 679 log.go:181] (0xc00043cc60) Reply frame received for 1\nI1014 23:12:06.735562 679 log.go:181] (0xc00043cc60) (0xc000be4a00) Create stream\nI1014 23:12:06.735571 679 log.go:181] (0xc00043cc60) (0xc000be4a00) Stream added, broadcasting: 3\nI1014 23:12:06.736264 679 log.go:181] (0xc00043cc60) Reply frame received for 3\nI1014 23:12:06.736285 679 log.go:181] (0xc00043cc60) (0xc000be5360) Create stream\nI1014 
23:12:06.736292 679 log.go:181] (0xc00043cc60) (0xc000be5360) Stream added, broadcasting: 5\nI1014 23:12:06.736973 679 log.go:181] (0xc00043cc60) Reply frame received for 5\nI1014 23:12:06.836336 679 log.go:181] (0xc00043cc60) Data frame received for 5\nI1014 23:12:06.836373 679 log.go:181] (0xc000be5360) (5) Data frame handling\nI1014 23:12:06.836386 679 log.go:181] (0xc000be5360) (5) Data frame sent\nI1014 23:12:06.836396 679 log.go:181] (0xc00043cc60) Data frame received for 5\nI1014 23:12:06.836405 679 log.go:181] (0xc000be5360) (5) Data frame handling\nI1014 23:12:06.836417 679 log.go:181] (0xc00043cc60) Data frame received for 3\nI1014 23:12:06.836437 679 log.go:181] (0xc000be4a00) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1014 23:12:06.839010 679 log.go:181] (0xc00043cc60) Data frame received for 1\nI1014 23:12:06.839043 679 log.go:181] (0xc0005560a0) (1) Data frame handling\nI1014 23:12:06.839063 679 log.go:181] (0xc0005560a0) (1) Data frame sent\nI1014 23:12:06.839087 679 log.go:181] (0xc00043cc60) (0xc0005560a0) Stream removed, broadcasting: 1\nI1014 23:12:06.839113 679 log.go:181] (0xc00043cc60) Go away received\nI1014 23:12:06.839501 679 log.go:181] (0xc00043cc60) (0xc0005560a0) Stream removed, broadcasting: 1\nI1014 23:12:06.839526 679 log.go:181] (0xc00043cc60) (0xc000be4a00) Stream removed, broadcasting: 3\nI1014 23:12:06.839540 679 log.go:181] (0xc00043cc60) (0xc000be5360) Stream removed, broadcasting: 5\n" Oct 14 23:12:06.846: INFO: stdout: "" Oct 14 23:12:06.847: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4050 execpod9jcx8 -- /bin/sh -x -c nc -zv -t -w 2 10.105.246.65 80' Oct 14 23:12:07.045: INFO: stderr: "I1014 23:12:06.973246 697 log.go:181] (0xc000860dc0) (0xc0005c8be0) Create stream\nI1014 23:12:06.973324 697 log.go:181] (0xc000860dc0) (0xc0005c8be0) Stream added, broadcasting: 
1\nI1014 23:12:06.978837 697 log.go:181] (0xc000860dc0) Reply frame received for 1\nI1014 23:12:06.978895 697 log.go:181] (0xc000860dc0) (0xc0005c85a0) Create stream\nI1014 23:12:06.978914 697 log.go:181] (0xc000860dc0) (0xc0005c85a0) Stream added, broadcasting: 3\nI1014 23:12:06.980235 697 log.go:181] (0xc000860dc0) Reply frame received for 3\nI1014 23:12:06.980275 697 log.go:181] (0xc000860dc0) (0xc0004fe000) Create stream\nI1014 23:12:06.980290 697 log.go:181] (0xc000860dc0) (0xc0004fe000) Stream added, broadcasting: 5\nI1014 23:12:06.981633 697 log.go:181] (0xc000860dc0) Reply frame received for 5\nI1014 23:12:07.037155 697 log.go:181] (0xc000860dc0) Data frame received for 5\nI1014 23:12:07.037204 697 log.go:181] (0xc0004fe000) (5) Data frame handling\nI1014 23:12:07.037221 697 log.go:181] (0xc0004fe000) (5) Data frame sent\nI1014 23:12:07.037235 697 log.go:181] (0xc000860dc0) Data frame received for 5\nI1014 23:12:07.037248 697 log.go:181] (0xc0004fe000) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.246.65 80\nConnection to 10.105.246.65 80 port [tcp/http] succeeded!\nI1014 23:12:07.037280 697 log.go:181] (0xc000860dc0) Data frame received for 3\nI1014 23:12:07.037296 697 log.go:181] (0xc0005c85a0) (3) Data frame handling\nI1014 23:12:07.038507 697 log.go:181] (0xc000860dc0) Data frame received for 1\nI1014 23:12:07.038539 697 log.go:181] (0xc0005c8be0) (1) Data frame handling\nI1014 23:12:07.038580 697 log.go:181] (0xc0005c8be0) (1) Data frame sent\nI1014 23:12:07.038674 697 log.go:181] (0xc000860dc0) (0xc0005c8be0) Stream removed, broadcasting: 1\nI1014 23:12:07.038728 697 log.go:181] (0xc000860dc0) Go away received\nI1014 23:12:07.039287 697 log.go:181] (0xc000860dc0) (0xc0005c8be0) Stream removed, broadcasting: 1\nI1014 23:12:07.039307 697 log.go:181] (0xc000860dc0) (0xc0005c85a0) Stream removed, broadcasting: 3\nI1014 23:12:07.039317 697 log.go:181] (0xc000860dc0) (0xc0004fe000) Stream removed, broadcasting: 5\n" Oct 14 23:12:07.045: INFO: stdout: "" 
Oct 14 23:12:07.045: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4050 execpod9jcx8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 32255' Oct 14 23:12:07.266: INFO: stderr: "I1014 23:12:07.184502 715 log.go:181] (0xc0009a4000) (0xc000c661e0) Create stream\nI1014 23:12:07.184567 715 log.go:181] (0xc0009a4000) (0xc000c661e0) Stream added, broadcasting: 1\nI1014 23:12:07.187081 715 log.go:181] (0xc0009a4000) Reply frame received for 1\nI1014 23:12:07.187131 715 log.go:181] (0xc0009a4000) (0xc00044d180) Create stream\nI1014 23:12:07.187146 715 log.go:181] (0xc0009a4000) (0xc00044d180) Stream added, broadcasting: 3\nI1014 23:12:07.187998 715 log.go:181] (0xc0009a4000) Reply frame received for 3\nI1014 23:12:07.188030 715 log.go:181] (0xc0009a4000) (0xc000e20000) Create stream\nI1014 23:12:07.188040 715 log.go:181] (0xc0009a4000) (0xc000e20000) Stream added, broadcasting: 5\nI1014 23:12:07.188974 715 log.go:181] (0xc0009a4000) Reply frame received for 5\nI1014 23:12:07.259972 715 log.go:181] (0xc0009a4000) Data frame received for 5\nI1014 23:12:07.260005 715 log.go:181] (0xc000e20000) (5) Data frame handling\nI1014 23:12:07.260032 715 log.go:181] (0xc000e20000) (5) Data frame sent\nI1014 23:12:07.260041 715 log.go:181] (0xc0009a4000) Data frame received for 5\nI1014 23:12:07.260047 715 log.go:181] (0xc000e20000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.18 32255\nConnection to 172.18.0.18 32255 port [tcp/32255] succeeded!\nI1014 23:12:07.260078 715 log.go:181] (0xc0009a4000) Data frame received for 3\nI1014 23:12:07.260085 715 log.go:181] (0xc00044d180) (3) Data frame handling\nI1014 23:12:07.261585 715 log.go:181] (0xc0009a4000) Data frame received for 1\nI1014 23:12:07.261598 715 log.go:181] (0xc000c661e0) (1) Data frame handling\nI1014 23:12:07.261610 715 log.go:181] (0xc000c661e0) (1) Data frame sent\nI1014 23:12:07.261627 715 log.go:181] (0xc0009a4000) (0xc000c661e0) Stream 
removed, broadcasting: 1\nI1014 23:12:07.261672 715 log.go:181] (0xc0009a4000) Go away received\nI1014 23:12:07.261918 715 log.go:181] (0xc0009a4000) (0xc000c661e0) Stream removed, broadcasting: 1\nI1014 23:12:07.261937 715 log.go:181] (0xc0009a4000) (0xc00044d180) Stream removed, broadcasting: 3\nI1014 23:12:07.261948 715 log.go:181] (0xc0009a4000) (0xc000e20000) Stream removed, broadcasting: 5\n" Oct 14 23:12:07.267: INFO: stdout: "" Oct 14 23:12:07.267: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4050 execpod9jcx8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 32255' Oct 14 23:12:07.488: INFO: stderr: "I1014 23:12:07.408117 733 log.go:181] (0xc000866000) (0xc00085e140) Create stream\nI1014 23:12:07.408188 733 log.go:181] (0xc000866000) (0xc00085e140) Stream added, broadcasting: 1\nI1014 23:12:07.413440 733 log.go:181] (0xc000866000) Reply frame received for 1\nI1014 23:12:07.413495 733 log.go:181] (0xc000866000) (0xc0009cc000) Create stream\nI1014 23:12:07.413510 733 log.go:181] (0xc000866000) (0xc0009cc000) Stream added, broadcasting: 3\nI1014 23:12:07.414488 733 log.go:181] (0xc000866000) Reply frame received for 3\nI1014 23:12:07.414521 733 log.go:181] (0xc000866000) (0xc00062eaa0) Create stream\nI1014 23:12:07.414539 733 log.go:181] (0xc000866000) (0xc00062eaa0) Stream added, broadcasting: 5\nI1014 23:12:07.415307 733 log.go:181] (0xc000866000) Reply frame received for 5\nI1014 23:12:07.481290 733 log.go:181] (0xc000866000) Data frame received for 3\nI1014 23:12:07.481334 733 log.go:181] (0xc0009cc000) (3) Data frame handling\nI1014 23:12:07.481392 733 log.go:181] (0xc000866000) Data frame received for 5\nI1014 23:12:07.481426 733 log.go:181] (0xc00062eaa0) (5) Data frame handling\nI1014 23:12:07.481442 733 log.go:181] (0xc00062eaa0) (5) Data frame sent\nI1014 23:12:07.481454 733 log.go:181] (0xc000866000) Data frame received for 5\nI1014 23:12:07.481462 733 log.go:181] 
(0xc00062eaa0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 32255\nConnection to 172.18.0.17 32255 port [tcp/32255] succeeded!\nI1014 23:12:07.482387 733 log.go:181] (0xc000866000) Data frame received for 1\nI1014 23:12:07.482408 733 log.go:181] (0xc00085e140) (1) Data frame handling\nI1014 23:12:07.482419 733 log.go:181] (0xc00085e140) (1) Data frame sent\nI1014 23:12:07.482429 733 log.go:181] (0xc000866000) (0xc00085e140) Stream removed, broadcasting: 1\nI1014 23:12:07.482439 733 log.go:181] (0xc000866000) Go away received\nI1014 23:12:07.482879 733 log.go:181] (0xc000866000) (0xc00085e140) Stream removed, broadcasting: 1\nI1014 23:12:07.482898 733 log.go:181] (0xc000866000) (0xc0009cc000) Stream removed, broadcasting: 3\nI1014 23:12:07.482908 733 log.go:181] (0xc000866000) (0xc00062eaa0) Stream removed, broadcasting: 5\n" Oct 14 23:12:07.488: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:07.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4050" for this suite. 
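The `nc -zv -t -w 2 <host> <port>` probes above only test TCP reachability of the service's cluster IP and node ports. As an aside, the same check can be sketched in Python (a hypothetical equivalent, not the e2e framework's own code):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP-connect probe, roughly equivalent to `nc -zv -t -w 2 host port`:
    succeed if a connection can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```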
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.168 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":59,"skipped":923,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:07.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom 
resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 14 23:12:08.032: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 14 23:12:10.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:12:12.858: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313928, 
loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:12:15.919: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:12:15.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:17.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-840" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.699 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":60,"skipped":926,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:17.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 14 
23:12:17.327: INFO: Waiting up to 5m0s for pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e" in namespace "emptydir-128" to be "Succeeded or Failed" Oct 14 23:12:17.347: INFO: Pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e": Phase="Pending", Reason="", readiness=false. Elapsed: 20.297758ms Oct 14 23:12:19.353: INFO: Pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025889342s Oct 14 23:12:21.358: INFO: Pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e": Phase="Running", Reason="", readiness=true. Elapsed: 4.030699335s Oct 14 23:12:23.362: INFO: Pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03526131s STEP: Saw pod success Oct 14 23:12:23.362: INFO: Pod "pod-12ffd442-3625-4abd-8b36-5833f7ec078e" satisfied condition "Succeeded or Failed" Oct 14 23:12:23.365: INFO: Trying to get logs from node leguer-worker pod pod-12ffd442-3625-4abd-8b36-5833f7ec078e container test-container: STEP: delete the pod Oct 14 23:12:23.397: INFO: Waiting for pod pod-12ffd442-3625-4abd-8b36-5833f7ec078e to disappear Oct 14 23:12:23.407: INFO: Pod pod-12ffd442-3625-4abd-8b36-5833f7ec078e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:23.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-128" for this suite. 
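The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above, with their roughly two-second elapsed increments, reflect a poll-until-terminal-phase loop. A minimal sketch of that pattern, with a caller-supplied `get_phase` stub rather than the framework's real pod client:

```python
import time

def wait_for_terminal_phase(get_phase, timeout: float = 300.0,
                            interval: float = 2.0) -> str:
    """Poll get_phase() until it returns a terminal pod phase
    ("Succeeded" or "Failed") or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase")
```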
• [SLOW TEST:6.219 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":61,"skipped":930,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:23.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:12:23.487: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-5cb15b5e-db75-4399-bc3c-5fb7ddd1568c" in 
namespace "security-context-test-6520" to be "Succeeded or Failed" Oct 14 23:12:23.514: INFO: Pod "alpine-nnp-false-5cb15b5e-db75-4399-bc3c-5fb7ddd1568c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.963482ms Oct 14 23:12:25.518: INFO: Pod "alpine-nnp-false-5cb15b5e-db75-4399-bc3c-5fb7ddd1568c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031197991s Oct 14 23:12:27.523: INFO: Pod "alpine-nnp-false-5cb15b5e-db75-4399-bc3c-5fb7ddd1568c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036087184s Oct 14 23:12:27.523: INFO: Pod "alpine-nnp-false-5cb15b5e-db75-4399-bc3c-5fb7ddd1568c" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:27.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6520" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":942,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:27.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:27.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3397" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":63,"skipped":970,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:27.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Oct 14 23:12:28.058: INFO: created test-pod-1 Oct 14 23:12:28.099: INFO: created test-pod-2 Oct 14 23:12:28.109: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:28.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9550" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":64,"skipped":1013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:28.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 14 23:12:28.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5216' Oct 14 23:12:28.735: INFO: stderr: "" Oct 14 23:12:28.735: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 14 23:12:28.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-5216' Oct 14 23:12:28.841: INFO: stderr: "" 
Oct 14 23:12:28.841: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-14T23:12:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T23:12:28Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-14T23:12:28Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5216\",\n \"resourceVersion\": \"2948361\",\n \"selfLink\": 
\"/api/v1/namespaces/kubectl-5216/pods/e2e-test-httpd-pod\",\n \"uid\": \"9c2d14c1-09ce-414b-9c0b-8fbad267c7fa\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-9q8hz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-9q8hz\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-9q8hz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T23:12:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T23:12:28Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T23:12:28Z\",\n \"message\": \"containers with unready status: 
[e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-14T23:12:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.17\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-14T23:12:28Z\"\n }\n}\n" Oct 14 23:12:28.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-5216' Oct 14 23:12:29.184: INFO: stderr: "W1014 23:12:28.913208 787 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Oct 14 23:12:29.184: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Oct 14 23:12:29.187: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5216' Oct 14 23:12:32.342: INFO: stderr: "" Oct 14 23:12:32.342: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:32.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5216" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":65,"skipped":1055,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:32.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-6196f068-4b03-4629-80c2-f6c814b2bd8d STEP: Creating a pod to test consume secrets Oct 14 23:12:32.633: INFO: Waiting up to 5m0s for pod "pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76" in namespace "secrets-7610" to be "Succeeded or Failed" Oct 14 23:12:32.872: INFO: Pod "pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76": Phase="Pending", Reason="", readiness=false. Elapsed: 238.743499ms Oct 14 23:12:34.948: INFO: Pod "pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.314860048s Oct 14 23:12:36.952: INFO: Pod "pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.318702899s STEP: Saw pod success Oct 14 23:12:36.952: INFO: Pod "pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76" satisfied condition "Succeeded or Failed" Oct 14 23:12:36.955: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76 container secret-volume-test: STEP: delete the pod Oct 14 23:12:37.027: INFO: Waiting for pod pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76 to disappear Oct 14 23:12:37.048: INFO: Pod pod-secrets-da56374e-6fe5-4c75-8988-8aa6cf39ed76 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:37.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7610" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":66,"skipped":1060,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:37.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:12:37.683: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:12:39.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313957, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313957, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313957, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738313957, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:12:42.763: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API 
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:42.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1115" for this suite. STEP: Destroying namespace "webhook-1115-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.020 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":67,"skipped":1062,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:43.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 14 23:12:43.142: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7666' Oct 14 23:12:43.429: INFO: stderr: "" Oct 14 23:12:43.429: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 14 23:12:43.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7666' Oct 14 23:12:43.581: INFO: stderr: "" Oct 14 23:12:43.581: INFO: stdout: "update-demo-nautilus-q9x76 update-demo-nautilus-tvrjz " Oct 14 23:12:43.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9x76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7666' Oct 14 23:12:43.679: INFO: stderr: "" Oct 14 23:12:43.679: INFO: stdout: "" Oct 14 23:12:43.679: INFO: update-demo-nautilus-q9x76 is created but not running Oct 14 23:12:48.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7666' Oct 14 23:12:48.942: INFO: stderr: "" Oct 14 23:12:48.942: INFO: stdout: "update-demo-nautilus-q9x76 update-demo-nautilus-tvrjz " Oct 14 23:12:48.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9x76 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7666' Oct 14 23:12:49.056: INFO: stderr: "" Oct 14 23:12:49.056: INFO: stdout: "true" Oct 14 23:12:49.057: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q9x76 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7666' Oct 14 23:12:49.147: INFO: stderr: "" Oct 14 23:12:49.147: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 23:12:49.147: INFO: validating pod update-demo-nautilus-q9x76 Oct 14 23:12:49.171: INFO: got data: { "image": "nautilus.jpg" } Oct 14 23:12:49.171: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 14 23:12:49.171: INFO: update-demo-nautilus-q9x76 is verified up and running Oct 14 23:12:49.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvrjz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7666' Oct 14 23:12:49.279: INFO: stderr: "" Oct 14 23:12:49.280: INFO: stdout: "true" Oct 14 23:12:49.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tvrjz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7666' Oct 14 23:12:49.391: INFO: stderr: "" Oct 14 23:12:49.391: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 14 23:12:49.391: INFO: validating pod update-demo-nautilus-tvrjz Oct 14 23:12:49.395: INFO: got data: { "image": "nautilus.jpg" } Oct 14 23:12:49.395: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 14 23:12:49.395: INFO: update-demo-nautilus-tvrjz is verified up and running STEP: using delete to clean up resources Oct 14 23:12:49.395: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7666' Oct 14 23:12:49.497: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 14 23:12:49.497: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 14 23:12:49.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7666' Oct 14 23:12:49.607: INFO: stderr: "No resources found in kubectl-7666 namespace.\n" Oct 14 23:12:49.607: INFO: stdout: "" Oct 14 23:12:49.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7666 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 23:12:49.717: INFO: stderr: "" Oct 14 23:12:49.717: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:49.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7666" for this suite. 
• [SLOW TEST:6.624 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":68,"skipped":1065,"failed":0} SSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:49.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Oct 14 23:12:50.527: INFO: created pod pod-service-account-defaultsa Oct 14 23:12:50.527: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 14 23:12:50.673: INFO: created pod pod-service-account-mountsa Oct 14 23:12:50.673: INFO: pod 
pod-service-account-mountsa service account token volume mount: true Oct 14 23:12:50.718: INFO: created pod pod-service-account-nomountsa Oct 14 23:12:50.718: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 14 23:12:50.752: INFO: created pod pod-service-account-defaultsa-mountspec Oct 14 23:12:50.752: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 14 23:12:50.852: INFO: created pod pod-service-account-mountsa-mountspec Oct 14 23:12:50.852: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 14 23:12:50.882: INFO: created pod pod-service-account-nomountsa-mountspec Oct 14 23:12:50.882: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 14 23:12:50.937: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 14 23:12:50.937: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 14 23:12:50.990: INFO: created pod pod-service-account-mountsa-nomountspec Oct 14 23:12:50.990: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 14 23:12:51.075: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 14 23:12:51.075: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:12:51.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7513" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":69,"skipped":1073,"failed":0} SSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:12:51.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Oct 14 23:12:51.331: INFO: Waiting up to 5m0s for pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b" in namespace "containers-8184" to be "Succeeded or Failed" Oct 14 23:12:51.338: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317703ms Oct 14 23:12:53.342: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011033024s Oct 14 23:12:55.614: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282260878s Oct 14 23:12:57.638: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.306278366s Oct 14 23:12:59.733: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.401322753s Oct 14 23:13:01.738: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.406829673s Oct 14 23:13:03.750: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.41910485s STEP: Saw pod success Oct 14 23:13:03.751: INFO: Pod "client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b" satisfied condition "Succeeded or Failed" Oct 14 23:13:03.753: INFO: Trying to get logs from node leguer-worker pod client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b container test-container: STEP: delete the pod Oct 14 23:13:04.220: INFO: Waiting for pod client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b to disappear Oct 14 23:13:04.547: INFO: Pod client-containers-0656bcae-f796-477d-9fae-e8c3e91ec27b no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:13:04.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8184" for this suite. 
• [SLOW TEST:13.750 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1079,"failed":0} [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:13:04.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Oct 14 23:13:05.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config cluster-info' Oct 14 
23:13:06.319: INFO: stderr: "" Oct 14 23:13:06.320: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43573\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:43573/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:13:06.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9036" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":71,"skipped":1079,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:13:06.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-410 
STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 14 23:13:07.568: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 14 23:13:08.367: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 23:13:10.667: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 23:13:12.597: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 14 23:13:14.372: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:16.372: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:18.371: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:20.371: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:22.371: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:24.377: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 14 23:13:26.656: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 14 23:13:26.728: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 23:13:28.826: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 23:13:30.799: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 14 23:13:32.732: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 14 23:13:40.869: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.171:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-410 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 23:13:40.870: INFO: >>> kubeConfig: /root/.kube/config I1014 23:13:40.895473 7 log.go:181] (0xc003946000) (0xc002b4a3c0) Create stream I1014 23:13:40.895502 7 log.go:181] (0xc003946000) (0xc002b4a3c0) Stream 
added, broadcasting: 1
I1014 23:13:40.897584 7 log.go:181] (0xc003946000) Reply frame received for 1
I1014 23:13:40.897612 7 log.go:181] (0xc003946000) (0xc002b4a460) Create stream
I1014 23:13:40.897621 7 log.go:181] (0xc003946000) (0xc002b4a460) Stream added, broadcasting: 3
I1014 23:13:40.898316 7 log.go:181] (0xc003946000) Reply frame received for 3
I1014 23:13:40.898343 7 log.go:181] (0xc003946000) (0xc0044babe0) Create stream
I1014 23:13:40.898354 7 log.go:181] (0xc003946000) (0xc0044babe0) Stream added, broadcasting: 5
I1014 23:13:40.899125 7 log.go:181] (0xc003946000) Reply frame received for 5
I1014 23:13:40.983589 7 log.go:181] (0xc003946000) Data frame received for 3
I1014 23:13:40.983630 7 log.go:181] (0xc002b4a460) (3) Data frame handling
I1014 23:13:40.983641 7 log.go:181] (0xc002b4a460) (3) Data frame sent
I1014 23:13:40.983653 7 log.go:181] (0xc003946000) Data frame received for 3
I1014 23:13:40.983663 7 log.go:181] (0xc002b4a460) (3) Data frame handling
I1014 23:13:40.983675 7 log.go:181] (0xc003946000) Data frame received for 5
I1014 23:13:40.983684 7 log.go:181] (0xc0044babe0) (5) Data frame handling
I1014 23:13:40.985783 7 log.go:181] (0xc003946000) Data frame received for 1
I1014 23:13:40.985820 7 log.go:181] (0xc002b4a3c0) (1) Data frame handling
I1014 23:13:40.985833 7 log.go:181] (0xc002b4a3c0) (1) Data frame sent
I1014 23:13:40.985849 7 log.go:181] (0xc003946000) (0xc002b4a3c0) Stream removed, broadcasting: 1
I1014 23:13:40.985868 7 log.go:181] (0xc003946000) Go away received
I1014 23:13:40.986000 7 log.go:181] (0xc003946000) (0xc002b4a3c0) Stream removed, broadcasting: 1
I1014 23:13:40.986030 7 log.go:181] (0xc003946000) (0xc002b4a460) Stream removed, broadcasting: 3
I1014 23:13:40.986048 7 log.go:181] (0xc003946000) (0xc0044babe0) Stream removed, broadcasting: 5
Oct 14 23:13:40.986: INFO: Found all expected endpoints: [netserver-0]
Oct 14 23:13:40.989: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.214:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-410 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 14 23:13:40.989: INFO: >>> kubeConfig: /root/.kube/config
I1014 23:13:41.021155 7 log.go:181] (0xc0007b9080) (0xc00478a1e0) Create stream
I1014 23:13:41.021193 7 log.go:181] (0xc0007b9080) (0xc00478a1e0) Stream added, broadcasting: 1
I1014 23:13:41.025158 7 log.go:181] (0xc0007b9080) Reply frame received for 1
I1014 23:13:41.025204 7 log.go:181] (0xc0007b9080) (0xc00421e3c0) Create stream
I1014 23:13:41.025219 7 log.go:181] (0xc0007b9080) (0xc00421e3c0) Stream added, broadcasting: 3
I1014 23:13:41.028273 7 log.go:181] (0xc0007b9080) Reply frame received for 3
I1014 23:13:41.028301 7 log.go:181] (0xc0007b9080) (0xc00440a000) Create stream
I1014 23:13:41.028310 7 log.go:181] (0xc0007b9080) (0xc00440a000) Stream added, broadcasting: 5
I1014 23:13:41.029389 7 log.go:181] (0xc0007b9080) Reply frame received for 5
I1014 23:13:41.108808 7 log.go:181] (0xc0007b9080) Data frame received for 5
I1014 23:13:41.108915 7 log.go:181] (0xc00440a000) (5) Data frame handling
I1014 23:13:41.108940 7 log.go:181] (0xc0007b9080) Data frame received for 3
I1014 23:13:41.108946 7 log.go:181] (0xc00421e3c0) (3) Data frame handling
I1014 23:13:41.108959 7 log.go:181] (0xc00421e3c0) (3) Data frame sent
I1014 23:13:41.108971 7 log.go:181] (0xc0007b9080) Data frame received for 3
I1014 23:13:41.108983 7 log.go:181] (0xc00421e3c0) (3) Data frame handling
I1014 23:13:41.110985 7 log.go:181] (0xc0007b9080) Data frame received for 1
I1014 23:13:41.111029 7 log.go:181] (0xc00478a1e0) (1) Data frame handling
I1014 23:13:41.111056 7 log.go:181] (0xc00478a1e0) (1) Data frame sent
I1014 23:13:41.111205 7 log.go:181] (0xc0007b9080) (0xc00478a1e0) Stream removed, broadcasting: 1
I1014 23:13:41.111247 7 log.go:181] (0xc0007b9080) Go away received
I1014 23:13:41.111314 7 log.go:181] (0xc0007b9080) (0xc00478a1e0) Stream removed, broadcasting: 1
I1014 23:13:41.111332 7 log.go:181] (0xc0007b9080) (0xc00421e3c0) Stream removed, broadcasting: 3
I1014 23:13:41.111346 7 log.go:181] (0xc0007b9080) (0xc00440a000) Stream removed, broadcasting: 5
Oct 14 23:13:41.111: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:13:41.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-410" for this suite.
• [SLOW TEST:34.464 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1088,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:13:41.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:13:41.235: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918" in namespace "downward-api-5807" to be "Succeeded or Failed"
Oct 14 23:13:41.258: INFO: Pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918": Phase="Pending", Reason="", readiness=false. Elapsed: 22.857447ms
Oct 14 23:13:43.873: INFO: Pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918": Phase="Pending", Reason="", readiness=false. Elapsed: 2.637514326s
Oct 14 23:13:45.878: INFO: Pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918": Phase="Running", Reason="", readiness=true. Elapsed: 4.642576717s
Oct 14 23:13:48.010: INFO: Pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.774553701s
STEP: Saw pod success
Oct 14 23:13:48.010: INFO: Pod "downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918" satisfied condition "Succeeded or Failed"
Oct 14 23:13:48.013: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918 container client-container: 
STEP: delete the pod
Oct 14 23:13:48.501: INFO: Waiting for pod downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918 to disappear
Oct 14 23:13:48.525: INFO: Pod downwardapi-volume-97b0acaa-9c20-404a-9239-6d8f5e5d6918 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:13:48.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5807" for this suite.
• [SLOW TEST:7.407 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":73,"skipped":1098,"failed":0}
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:13:48.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-746
[It] should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating statefulset ss in namespace statefulset-746
Oct 14 23:13:48.781: INFO: Found 0 stateful pods, waiting for 1
Oct 14 23:13:58.785: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 14 23:13:58.803: INFO: Deleting all statefulset in ns statefulset-746
Oct 14 23:13:58.829: INFO: Scaling statefulset ss to 0
Oct 14 23:14:08.951: INFO: Waiting for statefulset status.replicas updated to 0
Oct 14 23:14:08.954: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:14:08.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-746" for this suite.
• [SLOW TEST:20.450 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should have a working scale subresource [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":74,"skipped":1105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:14:08.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:14:13.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4101" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":75,"skipped":1134,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API should support creating IngressClass API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] IngressClass API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:14:13.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148
[It] should support creating IngressClass API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 14 23:14:13.251: INFO: starting watch
STEP: patching
STEP: updating
Oct 14 23:14:13.265: INFO: waiting for watch events with expected annotations
Oct 14 23:14:13.265: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:14:13.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6249" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":76,"skipped":1155,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:14:13.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:16:13.480: INFO: Deleting pod "var-expansion-6a72ad72-f9dd-4363-a714-859f799c82a8" in namespace "var-expansion-4929"
Oct 14 23:16:13.484: INFO: Wait up to 5m0s for pod "var-expansion-6a72ad72-f9dd-4363-a714-859f799c82a8" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:17.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4929" for this suite.
• [SLOW TEST:124.189 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":77,"skipped":1172,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:17.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:17.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1511" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":78,"skipped":1185,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:17.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:16:17.831: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"501fcd66-15de-457a-9222-7a20d87fadf0", Controller:(*bool)(0xc002724bc2), BlockOwnerDeletion:(*bool)(0xc002724bc3)}}
Oct 14 23:16:17.852: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9598bb01-72e2-43dd-9e6e-63122b655f76", Controller:(*bool)(0xc0034c0d5a), BlockOwnerDeletion:(*bool)(0xc0034c0d5b)}}
Oct 14 23:16:17.885: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"a5963e91-7f61-4c4e-b829-12df362a1e17", Controller:(*bool)(0xc002cdc5d2), BlockOwnerDeletion:(*bool)(0xc002cdc5d3)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:22.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5425" for this suite.
• [SLOW TEST:5.276 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":79,"skipped":1190,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:22.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service endpoint-test2 in namespace services-3148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3148 to expose endpoints map[]
Oct 14 23:16:23.053: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
Oct 14 23:16:24.061: INFO: successfully validated that service endpoint-test2 in namespace services-3148 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-3148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3148 to expose endpoints map[pod1:[80]]
Oct 14 23:16:28.117: INFO: successfully validated that service endpoint-test2 in namespace services-3148 exposes endpoints map[pod1:[80]]
STEP: Creating pod pod2 in namespace services-3148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3148 to expose endpoints map[pod1:[80] pod2:[80]]
Oct 14 23:16:32.196: INFO: successfully validated that service endpoint-test2 in namespace services-3148 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Deleting pod pod1 in namespace services-3148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3148 to expose endpoints map[pod2:[80]]
Oct 14 23:16:32.231: INFO: successfully validated that service endpoint-test2 in namespace services-3148 exposes endpoints map[pod2:[80]]
STEP: Deleting pod pod2 in namespace services-3148
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3148 to expose endpoints map[]
Oct 14 23:16:33.286: INFO: successfully validated that service endpoint-test2 in namespace services-3148 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:33.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3148" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:10.370 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve a basic endpoint from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":80,"skipped":1221,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:33.327: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test substitution in container's command
Oct 14 23:16:33.415: INFO: Waiting up to 5m0s for pod "var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7" in namespace "var-expansion-3999" to be "Succeeded or Failed"
Oct 14 23:16:33.424: INFO: Pod "var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.011969ms
Oct 14 23:16:35.428: INFO: Pod "var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012184393s
Oct 14 23:16:37.433: INFO: Pod "var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017440528s
STEP: Saw pod success
Oct 14 23:16:37.433: INFO: Pod "var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7" satisfied condition "Succeeded or Failed"
Oct 14 23:16:37.436: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7 container dapi-container: 
STEP: delete the pod
Oct 14 23:16:37.792: INFO: Waiting for pod var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7 to disappear
Oct 14 23:16:37.801: INFO: Pod var-expansion-92ecc5b7-703c-4a49-b68a-362c4f240ff7 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:37.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3999" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1246,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:37.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-589
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-589 to expose endpoints map[]
Oct 14 23:16:37.939: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Oct 14 23:16:38.948: INFO: successfully validated that service multi-endpoint-test in namespace services-589 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-589
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-589 to expose endpoints map[pod1:[100]]
Oct 14 23:16:43.042: INFO: successfully validated that service multi-endpoint-test in namespace services-589 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-589
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-589 to expose endpoints map[pod1:[100] pod2:[101]]
Oct 14 23:16:47.131: INFO: successfully validated that service multi-endpoint-test in namespace services-589 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-589
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-589 to expose endpoints map[pod2:[101]]
Oct 14 23:16:47.192: INFO: successfully validated that service multi-endpoint-test in namespace services-589 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-589
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-589 to expose endpoints map[]
Oct 14 23:16:48.341: INFO: successfully validated that service multi-endpoint-test in namespace services-589 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:48.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-589" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:10.575 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":82,"skipped":1325,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:48.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-projected-all-test-volume-de4456ef-0bb3-4814-b817-ee49fe475334
STEP: Creating secret with name secret-projected-all-test-volume-325b29aa-ba87-4d05-9437-3900c158dbb4
STEP: Creating a pod to test Check all projections for projected volume plugin
Oct 14 23:16:48.710: INFO: Waiting up to 5m0s for pod "projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3" in namespace "projected-6972" to be "Succeeded or Failed"
Oct 14 23:16:48.721: INFO: Pod "projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.787499ms
Oct 14 23:16:50.725: INFO: Pod "projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014648428s
Oct 14 23:16:52.729: INFO: Pod "projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018547325s
STEP: Saw pod success
Oct 14 23:16:52.729: INFO: Pod "projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3" satisfied condition "Succeeded or Failed"
Oct 14 23:16:52.733: INFO: Trying to get logs from node leguer-worker2 pod projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3 container projected-all-volume-test: 
STEP: delete the pod
Oct 14 23:16:52.767: INFO: Waiting for pod projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3 to disappear
Oct 14 23:16:52.775: INFO: Pod projected-volume-43c1b698-97f8-4c26-bb51-ff8443f379c3 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:16:52.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6972" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1330,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:16:52.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Oct 14 23:16:52.866: INFO: Created pod &Pod{ObjectMeta:{dns-9251 dns-9251 /api/v1/namespaces/dns-9251/pods/dns-9251 419ab53f-0aa4-4026-ab34-2c920cfac2b6 2949922 0 2020-10-14 23:16:52 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-10-14 23:16:52 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-q5t7s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-q5t7s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-q5t7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:16:52.871: INFO: The status of Pod dns-9251 is Pending, waiting for it to be Running (with Ready = true) Oct 14 23:16:54.878: INFO: The status of Pod dns-9251 is Pending, waiting for it to be Running (with Ready = true) Oct 14 23:16:56.876: INFO: The status of Pod dns-9251 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on 
pod... Oct 14 23:16:56.876: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-9251 PodName:dns-9251 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 23:16:56.876: INFO: >>> kubeConfig: /root/.kube/config I1014 23:16:56.911499 7 log.go:181] (0xc003946210) (0xc000e00fa0) Create stream I1014 23:16:56.911538 7 log.go:181] (0xc003946210) (0xc000e00fa0) Stream added, broadcasting: 1 I1014 23:16:56.914689 7 log.go:181] (0xc003946210) Reply frame received for 1 I1014 23:16:56.914747 7 log.go:181] (0xc003946210) (0xc0035b2780) Create stream I1014 23:16:56.914776 7 log.go:181] (0xc003946210) (0xc0035b2780) Stream added, broadcasting: 3 I1014 23:16:56.916004 7 log.go:181] (0xc003946210) Reply frame received for 3 I1014 23:16:56.916075 7 log.go:181] (0xc003946210) (0xc003d65720) Create stream I1014 23:16:56.916093 7 log.go:181] (0xc003946210) (0xc003d65720) Stream added, broadcasting: 5 I1014 23:16:56.917229 7 log.go:181] (0xc003946210) Reply frame received for 5 I1014 23:16:57.012822 7 log.go:181] (0xc003946210) Data frame received for 3 I1014 23:16:57.012955 7 log.go:181] (0xc0035b2780) (3) Data frame handling I1014 23:16:57.012978 7 log.go:181] (0xc0035b2780) (3) Data frame sent I1014 23:16:57.014602 7 log.go:181] (0xc003946210) Data frame received for 3 I1014 23:16:57.014637 7 log.go:181] (0xc0035b2780) (3) Data frame handling I1014 23:16:57.014688 7 log.go:181] (0xc003946210) Data frame received for 5 I1014 23:16:57.014725 7 log.go:181] (0xc003d65720) (5) Data frame handling I1014 23:16:57.017011 7 log.go:181] (0xc003946210) Data frame received for 1 I1014 23:16:57.017046 7 log.go:181] (0xc000e00fa0) (1) Data frame handling I1014 23:16:57.017067 7 log.go:181] (0xc000e00fa0) (1) Data frame sent I1014 23:16:57.017083 7 log.go:181] (0xc003946210) (0xc000e00fa0) Stream removed, broadcasting: 1 I1014 23:16:57.017105 7 log.go:181] (0xc003946210) Go away received I1014 23:16:57.017195 7 log.go:181] 
(0xc003946210) (0xc000e00fa0) Stream removed, broadcasting: 1 I1014 23:16:57.017223 7 log.go:181] (0xc003946210) (0xc0035b2780) Stream removed, broadcasting: 3 I1014 23:16:57.017242 7 log.go:181] (0xc003946210) (0xc003d65720) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Oct 14 23:16:57.017: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-9251 PodName:dns-9251 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 23:16:57.017: INFO: >>> kubeConfig: /root/.kube/config I1014 23:16:57.046184 7 log.go:181] (0xc003436e70) (0xc0035b2aa0) Create stream I1014 23:16:57.046209 7 log.go:181] (0xc003436e70) (0xc0035b2aa0) Stream added, broadcasting: 1 I1014 23:16:57.048552 7 log.go:181] (0xc003436e70) Reply frame received for 1 I1014 23:16:57.048600 7 log.go:181] (0xc003436e70) (0xc000e01040) Create stream I1014 23:16:57.048618 7 log.go:181] (0xc003436e70) (0xc000e01040) Stream added, broadcasting: 3 I1014 23:16:57.049770 7 log.go:181] (0xc003436e70) Reply frame received for 3 I1014 23:16:57.049819 7 log.go:181] (0xc003436e70) (0xc003cd0500) Create stream I1014 23:16:57.049837 7 log.go:181] (0xc003436e70) (0xc003cd0500) Stream added, broadcasting: 5 I1014 23:16:57.050614 7 log.go:181] (0xc003436e70) Reply frame received for 5 I1014 23:16:57.136375 7 log.go:181] (0xc003436e70) Data frame received for 3 I1014 23:16:57.136417 7 log.go:181] (0xc000e01040) (3) Data frame handling I1014 23:16:57.136458 7 log.go:181] (0xc000e01040) (3) Data frame sent I1014 23:16:57.139160 7 log.go:181] (0xc003436e70) Data frame received for 5 I1014 23:16:57.139186 7 log.go:181] (0xc003cd0500) (5) Data frame handling I1014 23:16:57.139208 7 log.go:181] (0xc003436e70) Data frame received for 3 I1014 23:16:57.139218 7 log.go:181] (0xc000e01040) (3) Data frame handling I1014 23:16:57.140555 7 log.go:181] (0xc003436e70) Data frame received for 1 I1014 23:16:57.140566 7 log.go:181] 
(0xc0035b2aa0) (1) Data frame handling I1014 23:16:57.140574 7 log.go:181] (0xc0035b2aa0) (1) Data frame sent I1014 23:16:57.140795 7 log.go:181] (0xc003436e70) (0xc0035b2aa0) Stream removed, broadcasting: 1 I1014 23:16:57.140821 7 log.go:181] (0xc003436e70) Go away received I1014 23:16:57.141049 7 log.go:181] (0xc003436e70) (0xc0035b2aa0) Stream removed, broadcasting: 1 I1014 23:16:57.141087 7 log.go:181] (0xc003436e70) (0xc000e01040) Stream removed, broadcasting: 3 I1014 23:16:57.141100 7 log.go:181] (0xc003436e70) (0xc003cd0500) Stream removed, broadcasting: 5 Oct 14 23:16:57.141: INFO: Deleting pod dns-9251... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:16:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9251" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":84,"skipped":1339,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:16:57.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:16:57.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 14 23:16:58.055: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:16:58Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:16:58Z]] name:name1 resourceVersion:2949974 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:726fc2f7-f897-4b85-820c-2c011d911d82] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 14 23:17:08.061: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:17:08Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:17:08Z]] name:name2 resourceVersion:2950026 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ee9913e5-6b45-4aaa-8f57-561ea0eb23ee] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 14 23:17:18.069: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:16:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:17:18Z]] name:name1 
resourceVersion:2950059 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:726fc2f7-f897-4b85-820c-2c011d911d82] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 14 23:17:28.077: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:17:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:17:28Z]] name:name2 resourceVersion:2950089 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ee9913e5-6b45-4aaa-8f57-561ea0eb23ee] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 14 23:17:38.087: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:16:58Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:17:18Z]] name:name1 resourceVersion:2950119 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:726fc2f7-f897-4b85-820c-2c011d911d82] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 14 23:17:48.098: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-14T23:17:08Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-14T23:17:28Z]] name:name2 
resourceVersion:2950149 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ee9913e5-6b45-4aaa-8f57-561ea0eb23ee] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:17:58.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-4485" for this suite. • [SLOW TEST:61.434 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":85,"skipped":1357,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:17:58.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:17:58.671: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 14 23:18:00.703: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8386 create -f -' Oct 14 23:18:04.229: INFO: stderr: "" Oct 14 23:18:04.229: INFO: stdout: "e2e-test-crd-publish-openapi-7345-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 14 23:18:04.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8386 delete e2e-test-crd-publish-openapi-7345-crds test-cr' Oct 14 23:18:04.354: INFO: stderr: "" Oct 14 23:18:04.354: INFO: stdout: "e2e-test-crd-publish-openapi-7345-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 14 23:18:04.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8386 apply -f -' Oct 14 23:18:04.622: INFO: stderr: "" Oct 14 23:18:04.622: INFO: stdout: "e2e-test-crd-publish-openapi-7345-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 14 23:18:04.622: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8386 delete e2e-test-crd-publish-openapi-7345-crds test-cr' Oct 14 23:18:04.729: INFO: stderr: "" Oct 14 23:18:04.729: INFO: stdout: "e2e-test-crd-publish-openapi-7345-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 14 23:18:04.729: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7345-crds' Oct 14 23:18:05.025: INFO: stderr: "" Oct 14 23:18:05.025: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7345-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:18:07.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8386" for this suite. • [SLOW TEST:9.380 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":86,"skipped":1419,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:18:08.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount 
an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Oct 14 23:18:12.628: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2351 pod-service-account-caab1a6c-ded3-4f31-a065-506378f17fe1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 14 23:18:12.854: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2351 pod-service-account-caab1a6c-ded3-4f31-a065-506378f17fe1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 14 23:18:13.071: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2351 pod-service-account-caab1a6c-ded3-4f31-a065-506378f17fe1 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:18:13.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2351" for this suite. 
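The ServiceAccounts test above execs `cat` on the three files the kubelet projects into any pod that mounts a service-account token: `token`, `ca.crt`, and `namespace` under `/var/run/secrets/kubernetes.io/serviceaccount`. A sketch of reading them from inside a container; the directory is parameterized (and faked below) so the snippet can run outside a pod:

```python
import os
import tempfile

def read_service_account(sa_dir="/var/run/secrets/kubernetes.io/serviceaccount"):
    """Read the credential files the kubelet mounts for the pod's service account."""
    creds = {}
    for name in ("token", "ca.crt", "namespace"):
        with open(os.path.join(sa_dir, name)) as f:
            creds[name] = f.read()
    return creds

# Outside a real pod there is no such mount, so fake one to show the shape.
# File contents here are placeholders, not real credentials.
demo = tempfile.mkdtemp()
for name, content in [("token", "placeholder-jwt"),
                      ("ca.crt", "placeholder-cert"),
                      ("namespace", "svcaccounts-2351")]:
    with open(os.path.join(demo, name), "w") as f:
        f.write(content)

assert read_service_account(demo)["namespace"] == "svcaccounts-2351"
```

In a real cluster a client would send the `token` contents as a bearer token and use `ca.crt` to verify the API server's certificate.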
• [SLOW TEST:5.377 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":87,"skipped":1421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:18:13.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 14 23:18:13.460: INFO: Waiting up to 5m0s for pod "pod-75c0e647-8658-4ed3-9e5f-8700c8290399" in namespace "emptydir-7767" to be "Succeeded or Failed" Oct 14 23:18:13.464: INFO: Pod "pod-75c0e647-8658-4ed3-9e5f-8700c8290399": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.463283ms Oct 14 23:18:15.469: INFO: Pod "pod-75c0e647-8658-4ed3-9e5f-8700c8290399": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009197649s Oct 14 23:18:17.481: INFO: Pod "pod-75c0e647-8658-4ed3-9e5f-8700c8290399": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020987498s STEP: Saw pod success Oct 14 23:18:17.481: INFO: Pod "pod-75c0e647-8658-4ed3-9e5f-8700c8290399" satisfied condition "Succeeded or Failed" Oct 14 23:18:17.484: INFO: Trying to get logs from node leguer-worker2 pod pod-75c0e647-8658-4ed3-9e5f-8700c8290399 container test-container: STEP: delete the pod Oct 14 23:18:17.509: INFO: Waiting for pod pod-75c0e647-8658-4ed3-9e5f-8700c8290399 to disappear Oct 14 23:18:17.517: INFO: Pod pod-75c0e647-8658-4ed3-9e5f-8700c8290399 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:18:17.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7767" for this suite. 
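The `(non-root,0644,tmpfs)` EmptyDir test above creates a pod whose volume is memory-backed and whose container, running as a non-root user, creates a file with mode 0644 and verifies the resulting permissions. A sketch of the shape of that pod; the image, user ID, and `mounttest` arguments are illustrative rather than the framework's exact values:

```python
# Sketch of an EmptyDir permission-check pod: tmpfs-backed volume, non-root
# container, file created with mode 0644. Arguments are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "emptydir-0644-tmpfs"},
    "spec": {
        "securityContext": {"runAsUser": 1001},  # non-root, per the test name
        "containers": [{
            "name": "test-container",
            "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            "args": ["mounttest",
                     "--new_file_0644=/test-volume/test-file",
                     "--file_perm=/test-volume/test-file"],
            "volumeMounts": [{"name": "test-volume",
                              "mountPath": "/test-volume"}],
        }],
        "volumes": [{"name": "test-volume",
                     "emptyDir": {"medium": "Memory"}}],  # "Memory" == tmpfs
        "restartPolicy": "Never",
    },
}

assert pod["spec"]["volumes"][0]["emptyDir"]["medium"] == "Memory"
```

The pod runs to `Succeeded` only if the observed permissions match, which is why the log waits on the "Succeeded or Failed" condition.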
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":88,"skipped":1444,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:18:17.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:18:18.187: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:18:20.197: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314298, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314298, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314298, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314298, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:18:23.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:18:35.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3122" for this suite. STEP: Destroying namespace "webhook-3122-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.142 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":89,"skipped":1445,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:18:35.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 14 23:18:35.785: INFO: Waiting up to 5m0s for pod "pod-477aed31-fcd4-42e1-9f1f-750930783108" in namespace 
"emptydir-9053" to be "Succeeded or Failed" Oct 14 23:18:35.790: INFO: Pod "pod-477aed31-fcd4-42e1-9f1f-750930783108": Phase="Pending", Reason="", readiness=false. Elapsed: 4.337712ms Oct 14 23:18:37.794: INFO: Pod "pod-477aed31-fcd4-42e1-9f1f-750930783108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008666838s Oct 14 23:18:39.798: INFO: Pod "pod-477aed31-fcd4-42e1-9f1f-750930783108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012484282s STEP: Saw pod success Oct 14 23:18:39.798: INFO: Pod "pod-477aed31-fcd4-42e1-9f1f-750930783108" satisfied condition "Succeeded or Failed" Oct 14 23:18:39.800: INFO: Trying to get logs from node leguer-worker2 pod pod-477aed31-fcd4-42e1-9f1f-750930783108 container test-container: STEP: delete the pod Oct 14 23:18:39.840: INFO: Waiting for pod pod-477aed31-fcd4-42e1-9f1f-750930783108 to disappear Oct 14 23:18:39.894: INFO: Pod pod-477aed31-fcd4-42e1-9f1f-750930783108 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:18:39.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9053" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1449,"failed":0} ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:18:39.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' 
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:13.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7065" for this suite. • [SLOW TEST:33.868 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":91,"skipped":1449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:13.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Oct 14 23:19:14.058: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Oct 14 23:19:15.044: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 14 23:19:17.459: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:19:19.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314355, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:19:22.193: INFO: Waited 721.447913ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:22.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7" for this suite. • [SLOW TEST:9.065 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":92,"skipped":1497,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:22.838: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 14 23:19:23.157: INFO: Waiting up to 5m0s for pod "pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab" in namespace "emptydir-3762" to be "Succeeded or Failed" Oct 14 23:19:23.204: INFO: Pod "pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab": Phase="Pending", Reason="", readiness=false. Elapsed: 46.64251ms Oct 14 23:19:25.207: INFO: Pod "pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050219022s Oct 14 23:19:27.212: INFO: Pod "pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05491544s STEP: Saw pod success Oct 14 23:19:27.212: INFO: Pod "pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab" satisfied condition "Succeeded or Failed" Oct 14 23:19:27.215: INFO: Trying to get logs from node leguer-worker2 pod pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab container test-container: STEP: delete the pod Oct 14 23:19:27.235: INFO: Waiting for pod pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab to disappear Oct 14 23:19:27.239: INFO: Pod pod-0f6a77b0-c46d-4622-9464-7e10c7c415ab no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:27.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3762" for this suite. 
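A pattern that recurs throughout this log is the framework's wait loop: "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" followed by phase checks roughly every two seconds with the elapsed time logged at each step. A simplified sketch of that loop — `get_phase` stands in for a real API-server lookup, and the clock/sleep are injectable so the loop can be exercised without real waiting:

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300, interval=2,
                          sleep=time.sleep, clock=time.monotonic):
    """Poll get_phase() until it returns 'Succeeded' or 'Failed', or time out.

    get_phase is a stand-in for fetching pod.status.phase from the API
    server; the real framework logs each observation much like this.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

This explains the Pending → Pending → (Running) → Succeeded progressions seen above: the first check fires almost immediately after creation (a few milliseconds elapsed), then subsequent checks land on ~2s boundaries until a terminal phase appears.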
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1507,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:27.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:19:27.341: INFO: Waiting up to 5m0s for pod "busybox-user-65534-2ef86ca7-a838-4130-a622-95df565fb5af" in namespace "security-context-test-4938" to be "Succeeded or Failed" Oct 14 23:19:27.367: INFO: Pod "busybox-user-65534-2ef86ca7-a838-4130-a622-95df565fb5af": Phase="Pending", Reason="", readiness=false. Elapsed: 26.241155ms Oct 14 23:19:29.477: INFO: Pod "busybox-user-65534-2ef86ca7-a838-4130-a622-95df565fb5af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.136125123s Oct 14 23:19:31.481: INFO: Pod "busybox-user-65534-2ef86ca7-a838-4130-a622-95df565fb5af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.139710064s Oct 14 23:19:31.481: INFO: Pod "busybox-user-65534-2ef86ca7-a838-4130-a622-95df565fb5af" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:31.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4938" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":94,"skipped":1522,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:31.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 14 23:19:39.660: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:39.671: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:41.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:41.735: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:43.672: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:43.677: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:45.672: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:45.677: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:47.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:47.676: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:49.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:49.677: INFO: Pod pod-with-prestop-exec-hook still exists Oct 14 23:19:51.671: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 14 23:19:51.676: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:51.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7154" for this suite. 
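The preStop test above hinges on one field of the pod spec: a `lifecycle.preStop` exec hook on the container, which the kubelet runs before sending the termination signal. That is why the deletion takes several polling rounds ("Pod pod-with-prestop-exec-hook still exists") — the hook must finish within the grace period before the pod disappears. A sketch of the relevant manifest shape; the sleep command is illustrative, and the real test's hook posts to the handler pod created in BeforeEach:

```python
def pod_with_prestop_exec(name, hook_command):
    """Pod whose container runs hook_command just before termination."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": "main",
                "image": "busybox",          # illustrative image
                "command": ["sleep", "3600"],  # stay alive until deleted
                "lifecycle": {
                    # Runs inside the container before SIGTERM is delivered.
                    "preStop": {"exec": {"command": hook_command}},
                },
            }],
        },
    }
```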
• [SLOW TEST:20.220 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":95,"skipped":1524,"failed":0} SSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:51.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:51.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2521" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":96,"skipped":1527,"failed":0} SSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:51.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:52.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2364" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":97,"skipped":1532,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:52.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:19:56.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-492" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":98,"skipped":1538,"failed":0} S ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:19:56.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:20:12.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1429" for this suite. • [SLOW TEST:16.284 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":303,"completed":99,"skipped":1539,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:20:12.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-69e469e3-2fde-40e3-b8bc-3a24fb024f99 STEP: Creating a pod to test consume configMaps Oct 14 23:20:12.515: INFO: Waiting up to 5m0s for pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a" in namespace "configmap-9982" to be "Succeeded or Failed" Oct 14 23:20:12.517: INFO: Pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27131ms Oct 14 23:20:14.523: INFO: Pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007430428s Oct 14 23:20:16.528: INFO: Pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a": Phase="Running", Reason="", readiness=true. Elapsed: 4.012447823s Oct 14 23:20:18.537: INFO: Pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.02191651s STEP: Saw pod success Oct 14 23:20:18.537: INFO: Pod "pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a" satisfied condition "Succeeded or Failed" Oct 14 23:20:18.539: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a container configmap-volume-test: STEP: delete the pod Oct 14 23:20:18.568: INFO: Waiting for pod pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a to disappear Oct 14 23:20:18.577: INFO: Pod pod-configmaps-540b156d-48c8-4348-8b52-f296f011ef0a no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:20:18.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9982" for this suite. • [SLOW TEST:6.197 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":100,"skipped":1549,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:20:18.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 14 23:20:22.721: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:20:22.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3754" for this suite. 
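The termination-message test above exercises two container fields: `terminationMessagePath` (the file the kubelet reads the message from) and `terminationMessagePolicy`. With `FallbackToLogsOnError`, the kubelet substitutes the tail of the container log only when the container exits with an error; on a clean exit, as in this run, the message stays empty — hence the log line matching `&{}` against an empty termination message. A sketch of a container spec exercising that case (the field names are the real API fields; the image and command are stand-ins):

```python
def container_with_fallback_termination_message(name):
    """Container whose termination message falls back to logs only on error."""
    return {
        "name": name,
        "image": "busybox",  # stand-in image
        # Exits 0 without writing /dev/termination-log: the message stays
        # empty, because the log fallback applies only to failed containers.
        "command": ["sh", "-c", "echo some log output; exit 0"],
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "FallbackToLogsOnError",
    }
```

Changing the command to exit non-zero would make the kubelet copy the log tail ("some log output") into the termination message instead.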
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":101,"skipped":1556,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:20:22.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl logs
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415
STEP: creating a pod
Oct 14 23:20:22.864: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-3314 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s'
Oct 14 23:20:22.980: INFO: stderr: ""
Oct 14 23:20:22.980: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Waiting for log generator to start.
Oct 14 23:20:22.980: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Oct 14 23:20:22.980: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3314" to be "running and ready, or succeeded"
Oct 14 23:20:22.991: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.130703ms
Oct 14 23:20:24.994: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01394281s
Oct 14 23:20:27.004: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.023560055s
Oct 14 23:20:27.004: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Oct 14 23:20:27.004: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Oct 14 23:20:27.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314'
Oct 14 23:20:27.125: INFO: stderr: ""
Oct 14 23:20:27.125: INFO: stdout: "I1014 23:20:25.577748 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/958 413\nI1014 23:20:25.777913 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/tl8 336\nI1014 23:20:25.977878 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/wcw 302\nI1014 23:20:26.177933 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/n2v 341\nI1014 23:20:26.377932 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bwc 517\nI1014 23:20:26.577946 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/jzf 587\nI1014 23:20:26.777907 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/sm6 336\nI1014 23:20:26.977919 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/m56 460\n"
STEP: limiting log lines
Oct 14 23:20:27.125: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314 --tail=1'
Oct 14 23:20:27.255: INFO: stderr: ""
Oct 14 23:20:27.255: INFO: stdout: "I1014 23:20:27.177924 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/8bkl 371\n"
Oct 14 23:20:27.255: INFO: got output "I1014 23:20:27.177924 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/8bkl 371\n"
STEP: limiting log bytes
Oct 14 23:20:27.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314 --limit-bytes=1'
Oct 14 23:20:27.361: INFO: stderr: ""
Oct 14 23:20:27.361: INFO: stdout: "I"
Oct 14 23:20:27.361: INFO: got output "I"
STEP: exposing timestamps
Oct 14 23:20:27.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314 --tail=1 --timestamps'
Oct 14 23:20:27.483: INFO: stderr: ""
Oct 14 23:20:27.483: INFO: stdout: "2020-10-14T23:20:27.378083733Z I1014 23:20:27.377901 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/2cts 591\n"
Oct 14 23:20:27.483: INFO: got output "2020-10-14T23:20:27.378083733Z I1014 23:20:27.377901 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/2cts 591\n"
STEP: restricting to a time range
Oct 14 23:20:29.984: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314 --since=1s'
Oct 14 23:20:30.098: INFO: stderr: ""
Oct 14 23:20:30.098: INFO: stdout: "I1014 23:20:29.177923 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/gslg 507\nI1014 23:20:29.377862 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/zrk4 360\nI1014 23:20:29.577861 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/4twn 243\nI1014 23:20:29.777895 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/df8f 338\nI1014 23:20:29.977930 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/vbc 595\n"
Oct 14 23:20:30.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-3314 --since=24h'
Oct 14 23:20:30.215: INFO: stderr: ""
Oct 14 23:20:30.215: INFO: stdout: "I1014 23:20:25.577748 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/958 413\nI1014 23:20:25.777913 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/tl8 336\nI1014 23:20:25.977878 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/wcw 302\nI1014 23:20:26.177933 1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/n2v 341\nI1014 23:20:26.377932 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/bwc 517\nI1014 23:20:26.577946 1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/jzf 587\nI1014 23:20:26.777907 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/sm6 336\nI1014 23:20:26.977919 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/m56 460\nI1014 23:20:27.177924 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/8bkl 371\nI1014 23:20:27.377901 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/2cts 591\nI1014 23:20:27.577959 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/6ls 427\nI1014 23:20:27.777890 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/g5c 568\nI1014 23:20:27.977931 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/t2r7 279\nI1014 23:20:28.177878 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/p57 512\nI1014 23:20:28.377933 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/687 403\nI1014 23:20:28.578008 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/jqcn 220\nI1014 23:20:28.777910 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/8sc 516\nI1014 23:20:28.977934 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/kube-system/pods/nv6 226\nI1014 23:20:29.177923 1 logs_generator.go:76] 18 GET /api/v1/namespaces/ns/pods/gslg 507\nI1014 23:20:29.377862 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/zrk4 360\nI1014 23:20:29.577861 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/4twn 243\nI1014 23:20:29.777895 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/df8f 338\nI1014 23:20:29.977930 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/vbc 595\nI1014 23:20:30.177857 1 logs_generator.go:76] 23 GET /api/v1/namespaces/default/pods/mzzd 560\n"
[AfterEach] Kubectl logs
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421
Oct 14 23:20:30.216: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-3314'
Oct 14 23:20:32.892: INFO: stderr: ""
Oct 14 23:20:32.892: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:20:32.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3314" for this suite.
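[Editor's aside] The filtering flags this test exercises (`--tail`, `--limit-bytes`, `--since`) are applied by the API server/kubelet; their semantics can be sketched outside a cluster. The helper names below are illustrative and are not part of the e2e framework or kubectl.

```python
from datetime import datetime, timedelta

# Illustrative re-implementations of the kubectl log filters exercised above.

def tail(log: str, n: int) -> str:
    """Like --tail=N: keep only the last N lines of the log."""
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log: str, n: int) -> str:
    """Like --limit-bytes=N: keep only the first N bytes of the log.
    This is why --limit-bytes=1 in the test returns just "I"."""
    return log.encode()[:n].decode(errors="ignore")

def since(entries, now, window: timedelta):
    """Like --since=DURATION: keep only entries newer than now - window."""
    cutoff = now - window
    return [(ts, line) for ts, line in entries if ts >= cutoff]

log = "line 1\nline 2\nline 3\n"
print(tail(log, 1))          # "line 3\n", analogous to --tail=1
print(limit_bytes(log, 1))   # "l", analogous to --limit-bytes=1
```

With `--since=1s` the test only gets entries 18-22 (the last second of output), while `--since=24h` returns everything the generator has emitted so far, matching the two stdout blocks above.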
• [SLOW TEST:10.079 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Kubectl logs
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411
should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":102,"skipped":1557,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:20:32.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 23:20:33.681: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 23:20:35.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314433, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314433, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314433, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738314433, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 23:20:38.736: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:20:38.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-533" for this suite.
STEP: Destroying namespace "webhook-533-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.054 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":103,"skipped":1557,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:20:38.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:20:39.018: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Oct 14 23:20:39.028: INFO: Number of nodes with available pods: 0
Oct 14 23:20:39.028: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Oct 14 23:20:39.100: INFO: Number of nodes with available pods: 0
Oct 14 23:20:39.100: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:40.130: INFO: Number of nodes with available pods: 0
Oct 14 23:20:40.130: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:41.148: INFO: Number of nodes with available pods: 0
Oct 14 23:20:41.148: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:42.105: INFO: Number of nodes with available pods: 1
Oct 14 23:20:42.105: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Oct 14 23:20:42.144: INFO: Number of nodes with available pods: 1
Oct 14 23:20:42.144: INFO: Number of running nodes: 0, number of available pods: 1
Oct 14 23:20:43.148: INFO: Number of nodes with available pods: 0
Oct 14 23:20:43.148: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Oct 14 23:20:43.165: INFO: Number of nodes with available pods: 0
Oct 14 23:20:43.165: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:44.329: INFO: Number of nodes with available pods: 0
Oct 14 23:20:44.329: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:45.170: INFO: Number of nodes with available pods: 0
Oct 14 23:20:45.170: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:46.170: INFO: Number of nodes with available pods: 0
Oct 14 23:20:46.170: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:47.168: INFO: Number of nodes with available pods: 0
Oct 14 23:20:47.168: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:48.171: INFO: Number of nodes with available pods: 0
Oct 14 23:20:48.171: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:49.169: INFO: Number of nodes with available pods: 0
Oct 14 23:20:49.169: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:50.170: INFO: Number of nodes with available pods: 0
Oct 14 23:20:50.170: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:51.395: INFO: Number of nodes with available pods: 0
Oct 14 23:20:51.395: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:52.191: INFO: Number of nodes with available pods: 0
Oct 14 23:20:52.191: INFO: Node leguer-worker2 is running more than one daemon pod
Oct 14 23:20:53.169: INFO: Number of nodes with available pods: 1
Oct 14 23:20:53.169: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7213, will wait for the garbage collector to delete the pods
Oct 14 23:20:53.236: INFO: Deleting DaemonSet.extensions daemon-set took: 6.807325ms
Oct 14 23:20:53.636: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.194249ms
Oct 14 23:20:59.540: INFO: Number of nodes with available pods: 0
Oct 14 23:20:59.540: INFO: Number of running nodes: 0, number of available pods: 0
Oct 14 23:20:59.543: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7213/daemonsets","resourceVersion":"2951401"},"items":null}
Oct 14 23:20:59.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7213/pods","resourceVersion":"2951401"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:20:59.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7213" for this suite.
• [SLOW TEST:20.653 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":104,"skipped":1567,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:20:59.606: INFO: >>> kubeConfig:
/root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4020 STEP: creating service affinity-clusterip in namespace services-4020 STEP: creating replication controller affinity-clusterip in namespace services-4020 I1014 23:20:59.731355 7 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-4020, replica count: 3 I1014 23:21:02.781859 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:21:05.782064 7 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:21:05.787: INFO: Creating new exec pod Oct 14 23:21:10.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4020 execpod-affinityww8kd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Oct 14 23:21:11.050: INFO: stderr: "I1014 23:21:10.962525 1331 log.go:181] (0xc00101cfd0) (0xc00048fd60) Create stream\nI1014 23:21:10.962573 1331 log.go:181] (0xc00101cfd0) (0xc00048fd60) Stream added, broadcasting: 1\nI1014 23:21:10.968116 1331 log.go:181] (0xc00101cfd0) Reply frame received for 1\nI1014 23:21:10.968164 1331 log.go:181] (0xc00101cfd0) (0xc00048e500) Create stream\nI1014 23:21:10.968183 1331 log.go:181] 
(0xc00101cfd0) (0xc00048e500) Stream added, broadcasting: 3\nI1014 23:21:10.969272 1331 log.go:181] (0xc00101cfd0) Reply frame received for 3\nI1014 23:21:10.969334 1331 log.go:181] (0xc00101cfd0) (0xc00048edc0) Create stream\nI1014 23:21:10.969350 1331 log.go:181] (0xc00101cfd0) (0xc00048edc0) Stream added, broadcasting: 5\nI1014 23:21:10.970548 1331 log.go:181] (0xc00101cfd0) Reply frame received for 5\nI1014 23:21:11.043750 1331 log.go:181] (0xc00101cfd0) Data frame received for 3\nI1014 23:21:11.043772 1331 log.go:181] (0xc00048e500) (3) Data frame handling\nI1014 23:21:11.043842 1331 log.go:181] (0xc00101cfd0) Data frame received for 5\nI1014 23:21:11.043878 1331 log.go:181] (0xc00048edc0) (5) Data frame handling\nI1014 23:21:11.043910 1331 log.go:181] (0xc00048edc0) (5) Data frame sent\nI1014 23:21:11.043935 1331 log.go:181] (0xc00101cfd0) Data frame received for 5\nI1014 23:21:11.043952 1331 log.go:181] (0xc00048edc0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1014 23:21:11.045506 1331 log.go:181] (0xc00101cfd0) Data frame received for 1\nI1014 23:21:11.045540 1331 log.go:181] (0xc00048fd60) (1) Data frame handling\nI1014 23:21:11.045564 1331 log.go:181] (0xc00048fd60) (1) Data frame sent\nI1014 23:21:11.045610 1331 log.go:181] (0xc00101cfd0) (0xc00048fd60) Stream removed, broadcasting: 1\nI1014 23:21:11.045707 1331 log.go:181] (0xc00101cfd0) Go away received\nI1014 23:21:11.045889 1331 log.go:181] (0xc00101cfd0) (0xc00048fd60) Stream removed, broadcasting: 1\nI1014 23:21:11.045904 1331 log.go:181] (0xc00101cfd0) (0xc00048e500) Stream removed, broadcasting: 3\nI1014 23:21:11.045909 1331 log.go:181] (0xc00101cfd0) (0xc00048edc0) Stream removed, broadcasting: 5\n" Oct 14 23:21:11.050: INFO: stdout: "" Oct 14 23:21:11.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4020 
execpod-affinityww8kd -- /bin/sh -x -c nc -zv -t -w 2 10.102.119.31 80' Oct 14 23:21:11.261: INFO: stderr: "I1014 23:21:11.184120 1349 log.go:181] (0xc00002c000) (0xc000c56280) Create stream\nI1014 23:21:11.184175 1349 log.go:181] (0xc00002c000) (0xc000c56280) Stream added, broadcasting: 1\nI1014 23:21:11.186253 1349 log.go:181] (0xc00002c000) Reply frame received for 1\nI1014 23:21:11.186290 1349 log.go:181] (0xc00002c000) (0xc000f86000) Create stream\nI1014 23:21:11.186309 1349 log.go:181] (0xc00002c000) (0xc000f86000) Stream added, broadcasting: 3\nI1014 23:21:11.187363 1349 log.go:181] (0xc00002c000) Reply frame received for 3\nI1014 23:21:11.187401 1349 log.go:181] (0xc00002c000) (0xc000c56320) Create stream\nI1014 23:21:11.187412 1349 log.go:181] (0xc00002c000) (0xc000c56320) Stream added, broadcasting: 5\nI1014 23:21:11.188349 1349 log.go:181] (0xc00002c000) Reply frame received for 5\nI1014 23:21:11.253343 1349 log.go:181] (0xc00002c000) Data frame received for 5\nI1014 23:21:11.253393 1349 log.go:181] (0xc000c56320) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.119.31 80\nConnection to 10.102.119.31 80 port [tcp/http] succeeded!\nI1014 23:21:11.253440 1349 log.go:181] (0xc00002c000) Data frame received for 3\nI1014 23:21:11.253515 1349 log.go:181] (0xc000f86000) (3) Data frame handling\nI1014 23:21:11.253579 1349 log.go:181] (0xc000c56320) (5) Data frame sent\nI1014 23:21:11.253619 1349 log.go:181] (0xc00002c000) Data frame received for 5\nI1014 23:21:11.253671 1349 log.go:181] (0xc000c56320) (5) Data frame handling\nI1014 23:21:11.256463 1349 log.go:181] (0xc00002c000) Data frame received for 1\nI1014 23:21:11.256481 1349 log.go:181] (0xc000c56280) (1) Data frame handling\nI1014 23:21:11.256490 1349 log.go:181] (0xc000c56280) (1) Data frame sent\nI1014 23:21:11.256497 1349 log.go:181] (0xc00002c000) (0xc000c56280) Stream removed, broadcasting: 1\nI1014 23:21:11.256593 1349 log.go:181] (0xc00002c000) Go away received\nI1014 23:21:11.256797 1349 
log.go:181] (0xc00002c000) (0xc000c56280) Stream removed, broadcasting: 1\nI1014 23:21:11.256814 1349 log.go:181] (0xc00002c000) (0xc000f86000) Stream removed, broadcasting: 3\nI1014 23:21:11.256819 1349 log.go:181] (0xc00002c000) (0xc000c56320) Stream removed, broadcasting: 5\n" Oct 14 23:21:11.261: INFO: stdout: "" Oct 14 23:21:11.261: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-4020 execpod-affinityww8kd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.102.119.31:80/ ; done' Oct 14 23:21:11.558: INFO: stderr: "I1014 23:21:11.391363 1367 log.go:181] (0xc00018c370) (0xc000d9a280) Create stream\nI1014 23:21:11.391424 1367 log.go:181] (0xc00018c370) (0xc000d9a280) Stream added, broadcasting: 1\nI1014 23:21:11.393497 1367 log.go:181] (0xc00018c370) Reply frame received for 1\nI1014 23:21:11.393552 1367 log.go:181] (0xc00018c370) (0xc000d9a320) Create stream\nI1014 23:21:11.393565 1367 log.go:181] (0xc00018c370) (0xc000d9a320) Stream added, broadcasting: 3\nI1014 23:21:11.394886 1367 log.go:181] (0xc00018c370) Reply frame received for 3\nI1014 23:21:11.394916 1367 log.go:181] (0xc00018c370) (0xc000bca500) Create stream\nI1014 23:21:11.394926 1367 log.go:181] (0xc00018c370) (0xc000bca500) Stream added, broadcasting: 5\nI1014 23:21:11.395911 1367 log.go:181] (0xc00018c370) Reply frame received for 5\nI1014 23:21:11.469372 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.469398 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.469406 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.469418 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.469422 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.469428 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 
23:21:11.474075 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.474100 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.474121 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.474656 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.474675 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.474682 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.474689 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.474694 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.474701 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.478316 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.478347 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.478376 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.478678 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.478698 1367 log.go:181] (0xc000bca500) (5) Data frame handling\n+ echo\nI1014 23:21:11.478713 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.478739 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.478759 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.478778 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.478789 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.478799 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.478814 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.486444 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.486464 1367 log.go:181] (0xc000bca500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.102.119.31:80/\nI1014 23:21:11.486496 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.486553 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.486577 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.486610 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.491268 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.491284 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.491304 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.491779 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.491798 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.491819 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.491835 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.491847 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.491878 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.497845 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.497861 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.497873 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.498227 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.498260 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.498271 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.498283 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.498290 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.498297 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.501413 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.501432 1367 log.go:181] 
(0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.501450 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.501592 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.501614 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.501624 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.501638 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.501645 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.501657 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.505150 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.505166 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.505182 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.505419 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.505440 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.505450 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.505477 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.505486 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.505494 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.511095 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.511115 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.511123 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.511850 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.511887 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.511898 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.511916 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.511946 
1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.511973 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.514943 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.514969 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.515002 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.515776 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.515790 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.515801 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.515811 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.515818 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.515826 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.520448 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.520461 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.520467 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.520982 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.521015 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.521028 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.521038 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.521047 1367 log.go:181] (0xc000bca500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.521070 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.521111 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.521137 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.521154 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.525126 
1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.525161 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.525172 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.525897 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.525924 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.525935 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.525950 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.525959 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.525967 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.529460 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.529473 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.529478 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.529896 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.529907 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.529916 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.529933 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.529942 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.529949 1367 log.go:181] (0xc000bca500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.534022 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.534055 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.534083 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.534470 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.534495 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.534514 1367 log.go:181] (0xc00018c370) Data frame received for 
5\nI1014 23:21:11.534549 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.534563 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.534572 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.534580 1367 log.go:181] (0xc000bca500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.534598 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.534610 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.539157 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.539181 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.539198 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.539684 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.539699 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.539705 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.539710 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.539716 1367 log.go:181] (0xc000bca500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.539738 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.539799 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.539827 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.539858 1367 log.go:181] (0xc000bca500) (5) Data frame sent\nI1014 23:21:11.543470 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.543494 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.543503 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.544322 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.544351 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.544360 1367 log.go:181] (0xc000bca500) (5) Data 
frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.102.119.31:80/\nI1014 23:21:11.544369 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.544391 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.544398 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.548619 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.548658 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.548690 1367 log.go:181] (0xc000d9a320) (3) Data frame sent\nI1014 23:21:11.549718 1367 log.go:181] (0xc00018c370) Data frame received for 5\nI1014 23:21:11.549742 1367 log.go:181] (0xc000bca500) (5) Data frame handling\nI1014 23:21:11.549939 1367 log.go:181] (0xc00018c370) Data frame received for 3\nI1014 23:21:11.549967 1367 log.go:181] (0xc000d9a320) (3) Data frame handling\nI1014 23:21:11.551474 1367 log.go:181] (0xc00018c370) Data frame received for 1\nI1014 23:21:11.551520 1367 log.go:181] (0xc000d9a280) (1) Data frame handling\nI1014 23:21:11.551551 1367 log.go:181] (0xc000d9a280) (1) Data frame sent\nI1014 23:21:11.551575 1367 log.go:181] (0xc00018c370) (0xc000d9a280) Stream removed, broadcasting: 1\nI1014 23:21:11.551595 1367 log.go:181] (0xc00018c370) Go away received\nI1014 23:21:11.552169 1367 log.go:181] (0xc00018c370) (0xc000d9a280) Stream removed, broadcasting: 1\nI1014 23:21:11.552207 1367 log.go:181] (0xc00018c370) (0xc000d9a320) Stream removed, broadcasting: 3\nI1014 23:21:11.552224 1367 log.go:181] (0xc00018c370) (0xc000bca500) Stream removed, broadcasting: 5\n" Oct 14 23:21:11.559: INFO: stdout: 
"\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj\naffinity-clusterip-x4hdj" Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Received response from host: affinity-clusterip-x4hdj Oct 14 23:21:11.559: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-4020, will wait for the garbage collector to delete the pods Oct 14 23:21:11.697: INFO: Deleting ReplicationController affinity-clusterip took: 7.057563ms 
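Editor's note: the pass criterion behind the curl loop captured above is simply that every response names the same backend pod. A minimal sketch of that check, using response lines copied from the stdout above (the loop itself, `curl -q -s --connect-timeout 2`, runs inside the exec pod against the ClusterIP):

```shell
# Sketch only: verify session affinity from a captured list of responses.
# The pod name below is the one reported in the test's stdout above.
responses="affinity-clusterip-x4hdj
affinity-clusterip-x4hdj
affinity-clusterip-x4hdj"

# Affinity holds when all responses collapse to a single unique pod name.
unique=$(printf '%s\n' "$responses" | sort -u | wc -l)
if [ "$unique" -eq 1 ]; then
  echo "session affinity OK"
else
  echo "session affinity broken: $unique distinct backends"
fi
```

With `sessionAffinity: ClientIP` on the Service, kube-proxy pins a given client to one endpoint, which is why all 16 responses above came from `affinity-clusterip-x4hdj`.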
Oct 14 23:21:11.997: INFO: Terminating ReplicationController affinity-clusterip pods took: 300.244861ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:21:19.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4020" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.032 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":105,"skipped":1575,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:21:19.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be 
provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-f8568a77-235c-4203-b420-44bd2f93d1ad STEP: Creating a pod to test consume secrets Oct 14 23:21:19.740: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986" in namespace "projected-2744" to be "Succeeded or Failed" Oct 14 23:21:19.747: INFO: Pod "pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986": Phase="Pending", Reason="", readiness=false. Elapsed: 7.437ms Oct 14 23:21:21.994: INFO: Pod "pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.254647008s Oct 14 23:21:23.999: INFO: Pod "pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.259649392s STEP: Saw pod success Oct 14 23:21:23.999: INFO: Pod "pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986" satisfied condition "Succeeded or Failed" Oct 14 23:21:24.002: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986 container projected-secret-volume-test: STEP: delete the pod Oct 14 23:21:24.036: INFO: Waiting for pod pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986 to disappear Oct 14 23:21:24.050: INFO: Pod pod-projected-secrets-1733163a-065d-46ad-a873-a4b023f90986 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:21:24.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2744" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":106,"skipped":1588,"failed":0} SSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:21:24.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 14 23:21:24.451: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 14 23:21:24.455: INFO: starting watch STEP: patching STEP: updating Oct 14 23:21:24.466: INFO: waiting for watch events with expected annotations Oct 14 23:21:24.466: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:21:24.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-9962"
for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":107,"skipped":1591,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:21:24.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 14 23:21:28.867: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:21:28.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9253" for this suite. 
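Editor's note: the container-runtime test above relies on `TerminationMessagePolicy: FallbackToLogsOnError` — when a container fails and writes no termination-message file, the tail of its log ("DONE" here) becomes the termination message. A hypothetical minimal manifest reproducing that setup (the pod and container names below are illustrative, not the ones the suite creates):

```shell
# Sketch only: a pod whose failing container reports its log tail ("DONE")
# as the termination message. All names are hypothetical.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
)
echo "$manifest"
# Apply with e.g.: echo "$manifest" | kubectl apply -f -
# After the container exits non-zero, its containerStatuses terminated
# state should carry message "DONE", matching the expectation logged above.
```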
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1613,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:21:28.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 23:21:33.595: INFO: Successfully updated pod "labelsupdated3651285-f53c-4ae5-8e24-c28596cfa0e7" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:21:37.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9193" for this suite. 
• [SLOW TEST:8.745 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":109,"skipped":1616,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:21:37.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1014 23:21:50.588421 7 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 23:22:52.609: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 14 23:22:52.609: INFO: Deleting pod "simpletest-rc-to-be-deleted-28dxs" in namespace "gc-6707" Oct 14 23:22:52.673: INFO: Deleting pod "simpletest-rc-to-be-deleted-56nlj" in namespace "gc-6707" Oct 14 23:22:52.733: INFO: Deleting pod "simpletest-rc-to-be-deleted-6f28n" in namespace "gc-6707" Oct 14 23:22:53.080: INFO: Deleting pod "simpletest-rc-to-be-deleted-7n2x9" in namespace "gc-6707" Oct 14 23:22:53.297: INFO: Deleting pod "simpletest-rc-to-be-deleted-ff2mm" in namespace "gc-6707" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:22:53.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6707" for this suite. 
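Editor's note: the garbage-collector test above gives half the pods two owners, so deleting `simpletest-rc-to-be-deleted` must not cascade to pods still owned by `simpletest-rc-to-stay`. A sketch of what such a dual-owner pod's metadata looks like (the RC names come from the log; the UIDs are placeholders, since real owner references require the owners' actual UIDs):

```shell
# Sketch only: pod metadata with two ownerReferences, as set up by the GC
# test above. UIDs are hypothetical placeholders.
manifest=$(cat <<'EOF'
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111  # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: 22222222-2222-2222-2222-222222222222  # placeholder
EOF
)
echo "$manifest"
# The GC only deletes an object once *all* its owners are gone, so pods
# listing simpletest-rc-to-stay survive the deletion of the other RC.
```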
• [SLOW TEST:76.303 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":110,"skipped":1618,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:22:53.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 14 23:22:54.562: INFO: Waiting up to 5m0s for pod "downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5" in namespace "downward-api-1035" to be "Succeeded or Failed" Oct 14 23:22:54.602: INFO: Pod 
"downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5": Phase="Pending", Reason="", readiness=false. Elapsed: 39.754071ms Oct 14 23:22:56.606: INFO: Pod "downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043952751s Oct 14 23:22:58.609: INFO: Pod "downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047048226s STEP: Saw pod success Oct 14 23:22:58.609: INFO: Pod "downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5" satisfied condition "Succeeded or Failed" Oct 14 23:22:58.611: INFO: Trying to get logs from node leguer-worker2 pod downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5 container dapi-container: STEP: delete the pod Oct 14 23:22:58.650: INFO: Waiting for pod downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5 to disappear Oct 14 23:22:58.661: INFO: Pod downward-api-2c34f1a0-df73-425c-b43a-2c530d34afc5 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:22:58.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1035" for this suite. 
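Editor's note: the downward API tests above (pod UID, name, namespace, IP as env vars) work by wiring `fieldRef` entries into the container's `env`. A hypothetical minimal manifest showing the mechanism (the container name `dapi-container` matches the log; the pod name and env var names are illustrative):

```shell
# Sketch only: expose pod metadata as environment variables via the
# downward API, as the test above does. Pod name is hypothetical.
manifest=$(cat <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep POD_"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
EOF
)
echo "$manifest"
```

The kubelet resolves each `fieldPath` at container start, so the test only has to read the container log and match the expected values.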
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1645,"failed":0} ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:22:58.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-446e5d38-b312-4701-88ef-50a39d90a857 STEP: Creating a pod to test consume secrets Oct 14 23:22:58.849: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8" in namespace "projected-2854" to be "Succeeded or Failed" Oct 14 23:22:59.026: INFO: Pod "pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8": Phase="Pending", Reason="", readiness=false. Elapsed: 176.219169ms Oct 14 23:23:01.031: INFO: Pod "pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.181234496s Oct 14 23:23:03.035: INFO: Pod "pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.185956017s STEP: Saw pod success Oct 14 23:23:03.035: INFO: Pod "pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8" satisfied condition "Succeeded or Failed" Oct 14 23:23:03.038: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8 container projected-secret-volume-test: STEP: delete the pod Oct 14 23:23:03.120: INFO: Waiting for pod pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8 to disappear Oct 14 23:23:03.146: INFO: Pod pod-projected-secrets-95aaa418-2b27-45f4-9a68-36eefa82afd8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:23:03.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2854" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":112,"skipped":1645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:23:03.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:23:10.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4162" for this suite. • [SLOW TEST:7.156 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":113,"skipped":1675,"failed":0} SSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:23:10.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should find a service from listing all namespaces [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching services [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:23:10.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-582" for this suite. 
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":114,"skipped":1680,"failed":0}
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:23:10.538: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod liveness-0fbe3f35-85ad-4fa0-8b21-dfba48767f5a in namespace container-probe-2416
Oct 14 23:23:14.637: INFO: Started pod liveness-0fbe3f35-85ad-4fa0-8b21-dfba48767f5a in namespace container-probe-2416
STEP: checking the pod's current state and verifying that restartCount is present
Oct 14 23:23:14.639: INFO: Initial restart count of pod liveness-0fbe3f35-85ad-4fa0-8b21-dfba48767f5a is 0
Oct 14 23:23:32.683: INFO: Restart count of pod container-probe-2416/liveness-0fbe3f35-85ad-4fa0-8b21-dfba48767f5a is now 1 (18.044124678s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:23:32.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2416" for this suite.
• [SLOW TEST:22.242 seconds]
[k8s.io] Probing container
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1680,"failed":0}
S
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:23:32.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:23:33.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:23:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6167" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":116,"skipped":1681,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:23:37.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-lv64
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 23:23:37.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lv64" in namespace "subpath-2926" to be "Succeeded or Failed"
Oct 14 23:23:37.588: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Pending", Reason="", readiness=false. Elapsed: 42.811255ms
Oct 14 23:23:39.673: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.127675067s
Oct 14 23:23:41.677: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 4.131479452s
Oct 14 23:23:43.681: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 6.135352836s
Oct 14 23:23:45.684: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 8.138558952s
Oct 14 23:23:47.715: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 10.169567182s
Oct 14 23:23:49.721: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 12.175921079s
Oct 14 23:23:51.724: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 14.178990788s
Oct 14 23:23:53.728: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 16.182786176s
Oct 14 23:23:55.732: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 18.186918195s
Oct 14 23:23:57.736: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 20.190375608s
Oct 14 23:23:59.739: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Running", Reason="", readiness=true. Elapsed: 22.193569212s
Oct 14 23:24:01.743: INFO: Pod "pod-subpath-test-configmap-lv64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.197404448s
STEP: Saw pod success
Oct 14 23:24:01.743: INFO: Pod "pod-subpath-test-configmap-lv64" satisfied condition "Succeeded or Failed"
Oct 14 23:24:01.746: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-configmap-lv64 container test-container-subpath-configmap-lv64:
STEP: delete the pod
Oct 14 23:24:01.820: INFO: Waiting for pod pod-subpath-test-configmap-lv64 to disappear
Oct 14 23:24:01.836: INFO: Pod pod-subpath-test-configmap-lv64 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lv64
Oct 14 23:24:01.836: INFO: Deleting pod "pod-subpath-test-configmap-lv64" in namespace "subpath-2926"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:24:01.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2926" for this suite.
• [SLOW TEST:24.447 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":117,"skipped":1688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:24:01.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:24:01.990: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722" in namespace "projected-6763" to be "Succeeded or Failed"
Oct 14 23:24:02.194: INFO: Pod "downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722": Phase="Pending", Reason="", readiness=false. Elapsed: 204.053569ms
Oct 14 23:24:04.198: INFO: Pod "downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.208248053s
Oct 14 23:24:06.202: INFO: Pod "downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.212254129s
STEP: Saw pod success
Oct 14 23:24:06.202: INFO: Pod "downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722" satisfied condition "Succeeded or Failed"
Oct 14 23:24:06.205: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722 container client-container:
STEP: delete the pod
Oct 14 23:24:06.370: INFO: Waiting for pod downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722 to disappear
Oct 14 23:24:06.419: INFO: Pod downwardapi-volume-a12b979d-f7ff-4f73-910b-00f259acf722 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:24:06.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6763" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1732,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:24:06.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-88kh
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 23:24:06.538: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-88kh" in namespace "subpath-4784" to be "Succeeded or Failed"
Oct 14 23:24:06.551: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Pending", Reason="", readiness=false. Elapsed: 13.262193ms
Oct 14 23:24:08.641: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103697657s
Oct 14 23:24:10.646: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 4.108346213s
Oct 14 23:24:12.650: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 6.112391869s
Oct 14 23:24:14.661: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 8.122903097s
Oct 14 23:24:16.665: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 10.127059542s
Oct 14 23:24:18.672: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 12.133973677s
Oct 14 23:24:20.676: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 14.138380284s
Oct 14 23:24:22.680: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 16.142484294s
Oct 14 23:24:24.691: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 18.153006943s
Oct 14 23:24:26.703: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 20.165034812s
Oct 14 23:24:28.706: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Running", Reason="", readiness=true. Elapsed: 22.168650157s
Oct 14 23:24:30.710: INFO: Pod "pod-subpath-test-secret-88kh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.172726588s
STEP: Saw pod success
Oct 14 23:24:30.710: INFO: Pod "pod-subpath-test-secret-88kh" satisfied condition "Succeeded or Failed"
Oct 14 23:24:30.719: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-secret-88kh container test-container-subpath-secret-88kh:
STEP: delete the pod
Oct 14 23:24:30.737: INFO: Waiting for pod pod-subpath-test-secret-88kh to disappear
Oct 14 23:24:30.764: INFO: Pod pod-subpath-test-secret-88kh no longer exists
STEP: Deleting pod pod-subpath-test-secret-88kh
Oct 14 23:24:30.764: INFO: Deleting pod "pod-subpath-test-secret-88kh" in namespace "subpath-4784"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:24:30.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4784" for this suite.
• [SLOW TEST:24.346 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":119,"skipped":1738,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:24:30.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-2928
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a new StatefulSet
Oct 14 23:24:30.914: INFO: Found 0 stateful pods, waiting for 3
Oct 14 23:24:40.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:24:40.919: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:24:40.919: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Oct 14 23:24:50.920: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:24:50.920: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:24:50.920: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Oct 14 23:24:50.950: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Oct 14 23:25:01.037: INFO: Updating stateful set ss2
Oct 14 23:25:01.078: INFO: Waiting for Pod statefulset-2928/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Oct 14 23:25:11.638: INFO: Found 2 stateful pods, waiting for 3
Oct 14 23:25:21.645: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:25:21.645: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Oct 14 23:25:21.645: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Oct 14 23:25:21.672: INFO: Updating stateful set ss2
Oct 14 23:25:21.703: INFO: Waiting for Pod statefulset-2928/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Oct 14 23:25:31.733: INFO: Updating stateful set ss2
Oct 14 23:25:31.799: INFO: Waiting for StatefulSet statefulset-2928/ss2 to complete update
Oct 14 23:25:31.799: INFO: Waiting for Pod statefulset-2928/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 14 23:25:41.807: INFO: Deleting all statefulset in ns statefulset-2928
Oct 14 23:25:41.810: INFO: Scaling statefulset ss2 to 0
Oct 14 23:26:11.838: INFO: Waiting for statefulset status.replicas updated to 0
Oct 14 23:26:11.842: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:26:11.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2928" for this suite.
• [SLOW TEST:101.120 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":120,"skipped":1744,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:26:11.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-projected-wxk7
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 23:26:11.987: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wxk7" in namespace "subpath-5312" to be "Succeeded or Failed"
Oct 14 23:26:12.022: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.085191ms
Oct 14 23:26:14.054: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067699436s
Oct 14 23:26:16.059: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 4.072593706s
Oct 14 23:26:18.063: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 6.076517964s
Oct 14 23:26:20.072: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 8.085210579s
Oct 14 23:26:22.077: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 10.090086648s
Oct 14 23:26:24.081: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 12.094723125s
Oct 14 23:26:26.086: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 14.099544421s
Oct 14 23:26:28.091: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 16.103881442s
Oct 14 23:26:30.096: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 18.108964755s
Oct 14 23:26:32.101: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 20.114525381s
Oct 14 23:26:34.106: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Running", Reason="", readiness=true. Elapsed: 22.119784513s
Oct 14 23:26:36.111: INFO: Pod "pod-subpath-test-projected-wxk7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.12472656s
STEP: Saw pod success
Oct 14 23:26:36.111: INFO: Pod "pod-subpath-test-projected-wxk7" satisfied condition "Succeeded or Failed"
Oct 14 23:26:36.114: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-projected-wxk7 container test-container-subpath-projected-wxk7:
STEP: delete the pod
Oct 14 23:26:36.143: INFO: Waiting for pod pod-subpath-test-projected-wxk7 to disappear
Oct 14 23:26:36.151: INFO: Pod pod-subpath-test-projected-wxk7 no longer exists
STEP: Deleting pod pod-subpath-test-projected-wxk7
Oct 14 23:26:36.151: INFO: Deleting pod "pod-subpath-test-projected-wxk7" in namespace "subpath-5312"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:26:36.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5312" for this suite.
• [SLOW TEST:24.267 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":121,"skipped":1751,"failed":0}
SS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:26:36.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:26:41.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2465" for this suite.
• [SLOW TEST:5.357 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":122,"skipped":1753,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:26:41.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Oct 14 23:26:49.682: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 14 23:26:49.719: INFO: Pod pod-with-prestop-http-hook still exists
Oct 14 23:26:51.719: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 14 23:26:51.723: INFO: Pod pod-with-prestop-http-hook still exists
Oct 14 23:26:53.719: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 14 23:26:53.722: INFO: Pod pod-with-prestop-http-hook still exists
Oct 14 23:26:55.719: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Oct 14 23:26:55.724: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:26:55.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-6870" for this suite.
• [SLOW TEST:14.253 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":1777,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:26:55.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:26:55.905: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b" in namespace "projected-3555" to be "Succeeded or Failed"
Oct 14 23:26:55.921: INFO: Pod "downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.572666ms
Oct 14 23:26:58.023: INFO: Pod "downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118458937s
Oct 14 23:27:00.028: INFO: Pod "downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.123318265s
STEP: Saw pod success
Oct 14 23:27:00.028: INFO: Pod "downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b" satisfied condition "Succeeded or Failed"
Oct 14 23:27:00.031: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b container client-container:
STEP: delete the pod
Oct 14 23:27:00.077: INFO: Waiting for pod downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b to disappear
Oct 14 23:27:00.082: INFO: Pod downwardapi-volume-2ea77b31-b1a4-4795-9c93-29d7d052334b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:27:00.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3555" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":1787,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:27:00.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Oct 14 23:27:00.847: INFO: Pod name wrapped-volume-race-9c603ea2-0988-4f0a-bfd7-29f40854bea1: Found 0 pods out of 5 Oct 14 23:27:05.857: INFO: Pod name wrapped-volume-race-9c603ea2-0988-4f0a-bfd7-29f40854bea1: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9c603ea2-0988-4f0a-bfd7-29f40854bea1 in namespace emptydir-wrapper-4220, will wait for the garbage collector to delete the pods Oct 14 23:27:19.978: INFO: Deleting ReplicationController wrapped-volume-race-9c603ea2-0988-4f0a-bfd7-29f40854bea1 took: 42.464585ms Oct 14 23:27:20.378: INFO: Terminating ReplicationController wrapped-volume-race-9c603ea2-0988-4f0a-bfd7-29f40854bea1 pods took: 400.189717ms STEP: Creating RC which spawns 
configmap-volume pods Oct 14 23:27:30.640: INFO: Pod name wrapped-volume-race-83addc7f-55ef-4b27-a670-90d47e12d894: Found 0 pods out of 5 Oct 14 23:27:35.648: INFO: Pod name wrapped-volume-race-83addc7f-55ef-4b27-a670-90d47e12d894: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-83addc7f-55ef-4b27-a670-90d47e12d894 in namespace emptydir-wrapper-4220, will wait for the garbage collector to delete the pods Oct 14 23:27:47.807: INFO: Deleting ReplicationController wrapped-volume-race-83addc7f-55ef-4b27-a670-90d47e12d894 took: 7.697406ms Oct 14 23:27:48.207: INFO: Terminating ReplicationController wrapped-volume-race-83addc7f-55ef-4b27-a670-90d47e12d894 pods took: 400.206128ms STEP: Creating RC which spawns configmap-volume pods Oct 14 23:27:59.851: INFO: Pod name wrapped-volume-race-4363fd94-fc12-456b-a205-7e8000ade733: Found 0 pods out of 5 Oct 14 23:28:04.858: INFO: Pod name wrapped-volume-race-4363fd94-fc12-456b-a205-7e8000ade733: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4363fd94-fc12-456b-a205-7e8000ade733 in namespace emptydir-wrapper-4220, will wait for the garbage collector to delete the pods Oct 14 23:28:18.977: INFO: Deleting ReplicationController wrapped-volume-race-4363fd94-fc12-456b-a205-7e8000ade733 took: 42.902628ms Oct 14 23:28:19.478: INFO: Terminating ReplicationController wrapped-volume-race-4363fd94-fc12-456b-a205-7e8000ade733 pods took: 500.283958ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:28:30.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-4220" for this suite. 
• [SLOW TEST:90.366 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":125,"skipped":1801,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:28:30.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Oct 14 23:28:30.530: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config 
create -f -' Oct 14 23:28:33.859: INFO: stderr: "" Oct 14 23:28:33.859: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 14 23:28:33.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config diff -f -' Oct 14 23:28:34.367: INFO: rc: 1 Oct 14 23:28:34.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete -f -' Oct 14 23:28:34.532: INFO: stderr: "" Oct 14 23:28:34.532: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:28:34.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4880" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":126,"skipped":1825,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:28:34.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Oct 14 23:28:34.609: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:34.621: INFO: Number of nodes with available pods: 0 Oct 14 23:28:34.621: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:28:35.870: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:35.931: INFO: Number of nodes with available pods: 0 Oct 14 23:28:35.931: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:28:36.863: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:36.877: INFO: Number of nodes with available pods: 0 Oct 14 23:28:36.877: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:28:37.722: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:37.914: INFO: Number of nodes with available pods: 0 Oct 14 23:28:37.914: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:28:38.643: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Oct 14 23:28:38.800: INFO: Number of nodes with available pods: 0 Oct 14 23:28:38.800: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:28:39.633: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:39.638: INFO: Number of nodes with available pods: 1 Oct 14 23:28:39.638: INFO: Node leguer-worker2 is running more than one daemon pod Oct 14 23:28:40.634: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:40.665: INFO: Number of nodes with available pods: 2 Oct 14 23:28:40.665: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Oct 14 23:28:40.798: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:40.832: INFO: Number of nodes with available pods: 1 Oct 14 23:28:40.832: INFO: Node leguer-worker2 is running more than one daemon pod Oct 14 23:28:41.837: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:41.854: INFO: Number of nodes with available pods: 1 Oct 14 23:28:41.854: INFO: Node leguer-worker2 is running more than one daemon pod Oct 14 23:28:42.845: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:43.025: INFO: Number of nodes with available pods: 1 Oct 14 23:28:43.025: INFO: Node leguer-worker2 is running more than one daemon pod Oct 14 
23:28:43.849: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:43.893: INFO: Number of nodes with available pods: 1 Oct 14 23:28:43.893: INFO: Node leguer-worker2 is running more than one daemon pod Oct 14 23:28:44.837: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:28:44.841: INFO: Number of nodes with available pods: 2 Oct 14 23:28:44.841: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8951, will wait for the garbage collector to delete the pods Oct 14 23:28:44.919: INFO: Deleting DaemonSet.extensions daemon-set took: 20.218032ms Oct 14 23:28:45.019: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.300593ms Oct 14 23:28:50.323: INFO: Number of nodes with available pods: 0 Oct 14 23:28:50.323: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 23:28:50.325: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8951/daemonsets","resourceVersion":"2954779"},"items":null} Oct 14 23:28:50.328: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8951/pods","resourceVersion":"2954779"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:28:50.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8951" for this suite. • [SLOW TEST:15.805 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":127,"skipped":1835,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:28:50.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name 
projected-secret-test-map-7fd4a5e9-30f7-452f-ac27-d0c6d5dde07e STEP: Creating a pod to test consume secrets Oct 14 23:28:50.604: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2" in namespace "projected-2741" to be "Succeeded or Failed" Oct 14 23:28:50.644: INFO: Pod "pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 39.327871ms Oct 14 23:28:52.648: INFO: Pod "pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043621543s Oct 14 23:28:54.653: INFO: Pod "pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04824001s STEP: Saw pod success Oct 14 23:28:54.653: INFO: Pod "pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2" satisfied condition "Succeeded or Failed" Oct 14 23:28:54.656: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2 container projected-secret-volume-test: STEP: delete the pod Oct 14 23:28:54.818: INFO: Waiting for pod pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2 to disappear Oct 14 23:28:54.826: INFO: Pod pod-projected-secrets-9ebf5322-ea91-41d0-a90f-b5b285a6e1a2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:28:54.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2741" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":128,"skipped":1855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:28:54.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:28:59.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8061" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":1878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:28:59.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2992 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2992 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2992 Oct 14 23:28:59.614: INFO: Found 
0 stateful pods, waiting for 1 Oct 14 23:29:09.617: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 14 23:29:09.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:29:09.889: INFO: stderr: "I1014 23:29:09.748737 1440 log.go:181] (0xc00003a160) (0xc00082c320) Create stream\nI1014 23:29:09.748790 1440 log.go:181] (0xc00003a160) (0xc00082c320) Stream added, broadcasting: 1\nI1014 23:29:09.752590 1440 log.go:181] (0xc00003a160) Reply frame received for 1\nI1014 23:29:09.752623 1440 log.go:181] (0xc00003a160) (0xc000a40320) Create stream\nI1014 23:29:09.752632 1440 log.go:181] (0xc00003a160) (0xc000a40320) Stream added, broadcasting: 3\nI1014 23:29:09.753822 1440 log.go:181] (0xc00003a160) Reply frame received for 3\nI1014 23:29:09.753864 1440 log.go:181] (0xc00003a160) (0xc000a40c80) Create stream\nI1014 23:29:09.753875 1440 log.go:181] (0xc00003a160) (0xc000a40c80) Stream added, broadcasting: 5\nI1014 23:29:09.754749 1440 log.go:181] (0xc00003a160) Reply frame received for 5\nI1014 23:29:09.849474 1440 log.go:181] (0xc00003a160) Data frame received for 5\nI1014 23:29:09.849499 1440 log.go:181] (0xc000a40c80) (5) Data frame handling\nI1014 23:29:09.849515 1440 log.go:181] (0xc000a40c80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:29:09.881626 1440 log.go:181] (0xc00003a160) Data frame received for 5\nI1014 23:29:09.881682 1440 log.go:181] (0xc000a40c80) (5) Data frame handling\nI1014 23:29:09.881734 1440 log.go:181] (0xc00003a160) Data frame received for 3\nI1014 23:29:09.881770 1440 log.go:181] (0xc000a40320) (3) Data frame handling\nI1014 23:29:09.881880 1440 log.go:181] (0xc000a40320) (3) Data frame sent\nI1014 
23:29:09.882123 1440 log.go:181] (0xc00003a160) Data frame received for 3\nI1014 23:29:09.882153 1440 log.go:181] (0xc000a40320) (3) Data frame handling\nI1014 23:29:09.883812 1440 log.go:181] (0xc00003a160) Data frame received for 1\nI1014 23:29:09.883849 1440 log.go:181] (0xc00082c320) (1) Data frame handling\nI1014 23:29:09.883869 1440 log.go:181] (0xc00082c320) (1) Data frame sent\nI1014 23:29:09.883898 1440 log.go:181] (0xc00003a160) (0xc00082c320) Stream removed, broadcasting: 1\nI1014 23:29:09.883926 1440 log.go:181] (0xc00003a160) Go away received\nI1014 23:29:09.884473 1440 log.go:181] (0xc00003a160) (0xc00082c320) Stream removed, broadcasting: 1\nI1014 23:29:09.884506 1440 log.go:181] (0xc00003a160) (0xc000a40320) Stream removed, broadcasting: 3\nI1014 23:29:09.884519 1440 log.go:181] (0xc00003a160) (0xc000a40c80) Stream removed, broadcasting: 5\n" Oct 14 23:29:09.889: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:29:09.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:29:09.894: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 14 23:29:19.899: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:29:19.899: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:29:19.948: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999647s Oct 14 23:29:20.974: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.96369802s Oct 14 23:29:21.979: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.937609459s Oct 14 23:29:22.983: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.932841932s Oct 14 23:29:23.988: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.928266309s Oct 14 23:29:24.992: INFO: Verifying statefulset ss doesn't scale past 1 
for another 4.92381113s Oct 14 23:29:25.997: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.919163177s Oct 14 23:29:27.001: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.914783339s Oct 14 23:29:28.005: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.910200662s Oct 14 23:29:29.010: INFO: Verifying statefulset ss doesn't scale past 1 for another 906.099514ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2992 Oct 14 23:29:30.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:29:30.279: INFO: stderr: "I1014 23:29:30.192715 1458 log.go:181] (0xc00003a0b0) (0xc000436e60) Create stream\nI1014 23:29:30.192792 1458 log.go:181] (0xc00003a0b0) (0xc000436e60) Stream added, broadcasting: 1\nI1014 23:29:30.195174 1458 log.go:181] (0xc00003a0b0) Reply frame received for 1\nI1014 23:29:30.195210 1458 log.go:181] (0xc00003a0b0) (0xc000376640) Create stream\nI1014 23:29:30.195220 1458 log.go:181] (0xc00003a0b0) (0xc000376640) Stream added, broadcasting: 3\nI1014 23:29:30.196180 1458 log.go:181] (0xc00003a0b0) Reply frame received for 3\nI1014 23:29:30.196220 1458 log.go:181] (0xc00003a0b0) (0xc0004375e0) Create stream\nI1014 23:29:30.196234 1458 log.go:181] (0xc00003a0b0) (0xc0004375e0) Stream added, broadcasting: 5\nI1014 23:29:30.197583 1458 log.go:181] (0xc00003a0b0) Reply frame received for 5\nI1014 23:29:30.272071 1458 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1014 23:29:30.272107 1458 log.go:181] (0xc000376640) (3) Data frame handling\nI1014 23:29:30.272127 1458 log.go:181] (0xc000376640) (3) Data frame sent\nI1014 23:29:30.272135 1458 log.go:181] (0xc00003a0b0) Data frame received for 3\nI1014 23:29:30.272141 1458 log.go:181] (0xc000376640) (3) Data frame 
handling\nI1014 23:29:30.272172 1458 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1014 23:29:30.272201 1458 log.go:181] (0xc0004375e0) (5) Data frame handling\nI1014 23:29:30.272232 1458 log.go:181] (0xc0004375e0) (5) Data frame sent\nI1014 23:29:30.272245 1458 log.go:181] (0xc00003a0b0) Data frame received for 5\nI1014 23:29:30.272255 1458 log.go:181] (0xc0004375e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:29:30.273732 1458 log.go:181] (0xc00003a0b0) Data frame received for 1\nI1014 23:29:30.273751 1458 log.go:181] (0xc000436e60) (1) Data frame handling\nI1014 23:29:30.273769 1458 log.go:181] (0xc000436e60) (1) Data frame sent\nI1014 23:29:30.273805 1458 log.go:181] (0xc00003a0b0) (0xc000436e60) Stream removed, broadcasting: 1\nI1014 23:29:30.273956 1458 log.go:181] (0xc00003a0b0) Go away received\nI1014 23:29:30.274157 1458 log.go:181] (0xc00003a0b0) (0xc000436e60) Stream removed, broadcasting: 1\nI1014 23:29:30.274179 1458 log.go:181] (0xc00003a0b0) (0xc000376640) Stream removed, broadcasting: 3\nI1014 23:29:30.274195 1458 log.go:181] (0xc00003a0b0) (0xc0004375e0) Stream removed, broadcasting: 5\n" Oct 14 23:29:30.279: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:29:30.279: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:29:30.283: INFO: Found 1 stateful pods, waiting for 3 Oct 14 23:29:40.289: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:29:40.289: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:29:40.289: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 14 23:29:40.297: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:29:40.510: INFO: stderr: "I1014 23:29:40.440762 1476 log.go:181] (0xc0004ee000) (0xc0001e41e0) Create stream\nI1014 23:29:40.440824 1476 log.go:181] (0xc0004ee000) (0xc0001e41e0) Stream added, broadcasting: 1\nI1014 23:29:40.442909 1476 log.go:181] (0xc0004ee000) Reply frame received for 1\nI1014 23:29:40.442959 1476 log.go:181] (0xc0004ee000) (0xc000a223c0) Create stream\nI1014 23:29:40.442976 1476 log.go:181] (0xc0004ee000) (0xc000a223c0) Stream added, broadcasting: 3\nI1014 23:29:40.443926 1476 log.go:181] (0xc0004ee000) Reply frame received for 3\nI1014 23:29:40.443962 1476 log.go:181] (0xc0004ee000) (0xc0001e4280) Create stream\nI1014 23:29:40.443977 1476 log.go:181] (0xc0004ee000) (0xc0001e4280) Stream added, broadcasting: 5\nI1014 23:29:40.445016 1476 log.go:181] (0xc0004ee000) Reply frame received for 5\nI1014 23:29:40.503090 1476 log.go:181] (0xc0004ee000) Data frame received for 3\nI1014 23:29:40.503118 1476 log.go:181] (0xc000a223c0) (3) Data frame handling\nI1014 23:29:40.503137 1476 log.go:181] (0xc000a223c0) (3) Data frame sent\nI1014 23:29:40.503153 1476 log.go:181] (0xc0004ee000) Data frame received for 3\nI1014 23:29:40.503164 1476 log.go:181] (0xc000a223c0) (3) Data frame handling\nI1014 23:29:40.503356 1476 log.go:181] (0xc0004ee000) Data frame received for 5\nI1014 23:29:40.503382 1476 log.go:181] (0xc0001e4280) (5) Data frame handling\nI1014 23:29:40.503400 1476 log.go:181] (0xc0001e4280) (5) Data frame sent\nI1014 23:29:40.503411 1476 log.go:181] (0xc0004ee000) Data frame received for 5\nI1014 23:29:40.503421 1476 log.go:181] (0xc0001e4280) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:29:40.504673 1476 log.go:181] (0xc0004ee000) Data frame received for 1\nI1014 23:29:40.504704 1476 
log.go:181] (0xc0001e41e0) (1) Data frame handling\nI1014 23:29:40.504724 1476 log.go:181] (0xc0001e41e0) (1) Data frame sent\nI1014 23:29:40.504738 1476 log.go:181] (0xc0004ee000) (0xc0001e41e0) Stream removed, broadcasting: 1\nI1014 23:29:40.504754 1476 log.go:181] (0xc0004ee000) Go away received\nI1014 23:29:40.505320 1476 log.go:181] (0xc0004ee000) (0xc0001e41e0) Stream removed, broadcasting: 1\nI1014 23:29:40.505344 1476 log.go:181] (0xc0004ee000) (0xc000a223c0) Stream removed, broadcasting: 3\nI1014 23:29:40.505357 1476 log.go:181] (0xc0004ee000) (0xc0001e4280) Stream removed, broadcasting: 5\n" Oct 14 23:29:40.510: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:29:40.511: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:29:40.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:29:40.770: INFO: stderr: "I1014 23:29:40.646379 1494 log.go:181] (0xc000f59080) (0xc000289b80) Create stream\nI1014 23:29:40.646427 1494 log.go:181] (0xc000f59080) (0xc000289b80) Stream added, broadcasting: 1\nI1014 23:29:40.651810 1494 log.go:181] (0xc000f59080) Reply frame received for 1\nI1014 23:29:40.651866 1494 log.go:181] (0xc000f59080) (0xc000a3e0a0) Create stream\nI1014 23:29:40.651880 1494 log.go:181] (0xc000f59080) (0xc000a3e0a0) Stream added, broadcasting: 3\nI1014 23:29:40.652777 1494 log.go:181] (0xc000f59080) Reply frame received for 3\nI1014 23:29:40.652807 1494 log.go:181] (0xc000f59080) (0xc0001a0820) Create stream\nI1014 23:29:40.652816 1494 log.go:181] (0xc000f59080) (0xc0001a0820) Stream added, broadcasting: 5\nI1014 23:29:40.653821 1494 log.go:181] (0xc000f59080) Reply frame received for 5\nI1014 23:29:40.715307 1494 log.go:181] (0xc000f59080) 
Data frame received for 5\nI1014 23:29:40.715336 1494 log.go:181] (0xc0001a0820) (5) Data frame handling\nI1014 23:29:40.715363 1494 log.go:181] (0xc0001a0820) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:29:40.759861 1494 log.go:181] (0xc000f59080) Data frame received for 3\nI1014 23:29:40.759905 1494 log.go:181] (0xc000a3e0a0) (3) Data frame handling\nI1014 23:29:40.759928 1494 log.go:181] (0xc000a3e0a0) (3) Data frame sent\nI1014 23:29:40.759952 1494 log.go:181] (0xc000f59080) Data frame received for 5\nI1014 23:29:40.759967 1494 log.go:181] (0xc0001a0820) (5) Data frame handling\nI1014 23:29:40.760238 1494 log.go:181] (0xc000f59080) Data frame received for 3\nI1014 23:29:40.760263 1494 log.go:181] (0xc000a3e0a0) (3) Data frame handling\nI1014 23:29:40.762346 1494 log.go:181] (0xc000f59080) Data frame received for 1\nI1014 23:29:40.762365 1494 log.go:181] (0xc000289b80) (1) Data frame handling\nI1014 23:29:40.762388 1494 log.go:181] (0xc000289b80) (1) Data frame sent\nI1014 23:29:40.762400 1494 log.go:181] (0xc000f59080) (0xc000289b80) Stream removed, broadcasting: 1\nI1014 23:29:40.762417 1494 log.go:181] (0xc000f59080) Go away received\nI1014 23:29:40.762909 1494 log.go:181] (0xc000f59080) (0xc000289b80) Stream removed, broadcasting: 1\nI1014 23:29:40.762941 1494 log.go:181] (0xc000f59080) (0xc000a3e0a0) Stream removed, broadcasting: 3\nI1014 23:29:40.762956 1494 log.go:181] (0xc000f59080) (0xc0001a0820) Stream removed, broadcasting: 5\n" Oct 14 23:29:40.770: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:29:40.770: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:29:40.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ 
|| true' Oct 14 23:29:41.014: INFO: stderr: "I1014 23:29:40.906615 1512 log.go:181] (0xc000d41080) (0xc0003d5a40) Create stream\nI1014 23:29:40.906684 1512 log.go:181] (0xc000d41080) (0xc0003d5a40) Stream added, broadcasting: 1\nI1014 23:29:40.913287 1512 log.go:181] (0xc000d41080) Reply frame received for 1\nI1014 23:29:40.913339 1512 log.go:181] (0xc000d41080) (0xc000ca0000) Create stream\nI1014 23:29:40.913354 1512 log.go:181] (0xc000d41080) (0xc000ca0000) Stream added, broadcasting: 3\nI1014 23:29:40.915260 1512 log.go:181] (0xc000d41080) Reply frame received for 3\nI1014 23:29:40.915300 1512 log.go:181] (0xc000d41080) (0xc000599720) Create stream\nI1014 23:29:40.915320 1512 log.go:181] (0xc000d41080) (0xc000599720) Stream added, broadcasting: 5\nI1014 23:29:40.916379 1512 log.go:181] (0xc000d41080) Reply frame received for 5\nI1014 23:29:40.977332 1512 log.go:181] (0xc000d41080) Data frame received for 5\nI1014 23:29:40.977371 1512 log.go:181] (0xc000599720) (5) Data frame handling\nI1014 23:29:40.977403 1512 log.go:181] (0xc000599720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:29:41.005410 1512 log.go:181] (0xc000d41080) Data frame received for 3\nI1014 23:29:41.005452 1512 log.go:181] (0xc000ca0000) (3) Data frame handling\nI1014 23:29:41.005487 1512 log.go:181] (0xc000ca0000) (3) Data frame sent\nI1014 23:29:41.005861 1512 log.go:181] (0xc000d41080) Data frame received for 5\nI1014 23:29:41.005910 1512 log.go:181] (0xc000599720) (5) Data frame handling\nI1014 23:29:41.005941 1512 log.go:181] (0xc000d41080) Data frame received for 3\nI1014 23:29:41.005963 1512 log.go:181] (0xc000ca0000) (3) Data frame handling\nI1014 23:29:41.008221 1512 log.go:181] (0xc000d41080) Data frame received for 1\nI1014 23:29:41.008263 1512 log.go:181] (0xc0003d5a40) (1) Data frame handling\nI1014 23:29:41.008300 1512 log.go:181] (0xc0003d5a40) (1) Data frame sent\nI1014 23:29:41.008343 1512 log.go:181] (0xc000d41080) (0xc0003d5a40) Stream 
removed, broadcasting: 1\nI1014 23:29:41.008371 1512 log.go:181] (0xc000d41080) Go away received\nI1014 23:29:41.009244 1512 log.go:181] (0xc000d41080) (0xc0003d5a40) Stream removed, broadcasting: 1\nI1014 23:29:41.009281 1512 log.go:181] (0xc000d41080) (0xc000ca0000) Stream removed, broadcasting: 3\nI1014 23:29:41.009296 1512 log.go:181] (0xc000d41080) (0xc000599720) Stream removed, broadcasting: 5\n" Oct 14 23:29:41.014: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:29:41.014: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:29:41.014: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:29:41.017: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Oct 14 23:29:51.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:29:51.026: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:29:51.026: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:29:51.053: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999702s Oct 14 23:29:52.067: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98230113s Oct 14 23:29:53.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968475362s Oct 14 23:29:54.086: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.964562241s Oct 14 23:29:55.091: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.950002961s Oct 14 23:29:56.096: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.94481481s Oct 14 23:29:57.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.939794492s Oct 14 23:29:58.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.935279066s Oct 14 23:29:59.133: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 1.907751513s Oct 14 23:30:00.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 902.787259ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-2992 Oct 14 23:30:01.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:30:01.400: INFO: stderr: "I1014 23:30:01.296198 1530 log.go:181] (0xc0001bf8c0) (0xc000d18aa0) Create stream\nI1014 23:30:01.296262 1530 log.go:181] (0xc0001bf8c0) (0xc000d18aa0) Stream added, broadcasting: 1\nI1014 23:30:01.300293 1530 log.go:181] (0xc0001bf8c0) Reply frame received for 1\nI1014 23:30:01.300351 1530 log.go:181] (0xc0001bf8c0) (0xc000d18000) Create stream\nI1014 23:30:01.300368 1530 log.go:181] (0xc0001bf8c0) (0xc000d18000) Stream added, broadcasting: 3\nI1014 23:30:01.301459 1530 log.go:181] (0xc0001bf8c0) Reply frame received for 3\nI1014 23:30:01.301503 1530 log.go:181] (0xc0001bf8c0) (0xc00052e140) Create stream\nI1014 23:30:01.301519 1530 log.go:181] (0xc0001bf8c0) (0xc00052e140) Stream added, broadcasting: 5\nI1014 23:30:01.302481 1530 log.go:181] (0xc0001bf8c0) Reply frame received for 5\nI1014 23:30:01.388613 1530 log.go:181] (0xc0001bf8c0) Data frame received for 3\nI1014 23:30:01.388647 1530 log.go:181] (0xc000d18000) (3) Data frame handling\nI1014 23:30:01.388657 1530 log.go:181] (0xc000d18000) (3) Data frame sent\nI1014 23:30:01.388664 1530 log.go:181] (0xc0001bf8c0) Data frame received for 3\nI1014 23:30:01.388670 1530 log.go:181] (0xc000d18000) (3) Data frame handling\nI1014 23:30:01.388705 1530 log.go:181] (0xc0001bf8c0) Data frame received for 5\nI1014 23:30:01.388763 1530 log.go:181] (0xc00052e140) (5) Data frame handling\nI1014 23:30:01.388799 1530 log.go:181] (0xc00052e140) (5) Data frame sent\nI1014
23:30:01.388818 1530 log.go:181] (0xc0001bf8c0) Data frame received for 5\nI1014 23:30:01.388941 1530 log.go:181] (0xc00052e140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:30:01.394684 1530 log.go:181] (0xc0001bf8c0) Data frame received for 1\nI1014 23:30:01.394703 1530 log.go:181] (0xc000d18aa0) (1) Data frame handling\nI1014 23:30:01.394720 1530 log.go:181] (0xc000d18aa0) (1) Data frame sent\nI1014 23:30:01.394739 1530 log.go:181] (0xc0001bf8c0) (0xc000d18aa0) Stream removed, broadcasting: 1\nI1014 23:30:01.394920 1530 log.go:181] (0xc0001bf8c0) Go away received\nI1014 23:30:01.395112 1530 log.go:181] (0xc0001bf8c0) (0xc000d18aa0) Stream removed, broadcasting: 1\nI1014 23:30:01.395128 1530 log.go:181] (0xc0001bf8c0) (0xc000d18000) Stream removed, broadcasting: 3\nI1014 23:30:01.395136 1530 log.go:181] (0xc0001bf8c0) (0xc00052e140) Stream removed, broadcasting: 5\n" Oct 14 23:30:01.400: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:30:01.400: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:30:01.400: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:30:01.589: INFO: stderr: "I1014 23:30:01.525951 1548 log.go:181] (0xc0008dde40) (0xc0005acfa0) Create stream\nI1014 23:30:01.525995 1548 log.go:181] (0xc0008dde40) (0xc0005acfa0) Stream added, broadcasting: 1\nI1014 23:30:01.527955 1548 log.go:181] (0xc0008dde40) Reply frame received for 1\nI1014 23:30:01.528003 1548 log.go:181] (0xc0008dde40) (0xc000616320) Create stream\nI1014 23:30:01.528021 1548 log.go:181] (0xc0008dde40) (0xc000616320) Stream added, broadcasting: 3\nI1014 23:30:01.528686 1548 log.go:181] (0xc0008dde40) Reply frame received for 3\nI1014 
23:30:01.528708 1548 log.go:181] (0xc0008dde40) (0xc0008d4b40) Create stream\nI1014 23:30:01.528717 1548 log.go:181] (0xc0008dde40) (0xc0008d4b40) Stream added, broadcasting: 5\nI1014 23:30:01.529507 1548 log.go:181] (0xc0008dde40) Reply frame received for 5\nI1014 23:30:01.582106 1548 log.go:181] (0xc0008dde40) Data frame received for 5\nI1014 23:30:01.582142 1548 log.go:181] (0xc0008d4b40) (5) Data frame handling\nI1014 23:30:01.582151 1548 log.go:181] (0xc0008d4b40) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:30:01.582184 1548 log.go:181] (0xc0008dde40) Data frame received for 3\nI1014 23:30:01.582215 1548 log.go:181] (0xc000616320) (3) Data frame handling\nI1014 23:30:01.582233 1548 log.go:181] (0xc000616320) (3) Data frame sent\nI1014 23:30:01.582246 1548 log.go:181] (0xc0008dde40) Data frame received for 3\nI1014 23:30:01.582255 1548 log.go:181] (0xc000616320) (3) Data frame handling\nI1014 23:30:01.582320 1548 log.go:181] (0xc0008dde40) Data frame received for 5\nI1014 23:30:01.582335 1548 log.go:181] (0xc0008d4b40) (5) Data frame handling\nI1014 23:30:01.584052 1548 log.go:181] (0xc0008dde40) Data frame received for 1\nI1014 23:30:01.584075 1548 log.go:181] (0xc0005acfa0) (1) Data frame handling\nI1014 23:30:01.584100 1548 log.go:181] (0xc0005acfa0) (1) Data frame sent\nI1014 23:30:01.584118 1548 log.go:181] (0xc0008dde40) (0xc0005acfa0) Stream removed, broadcasting: 1\nI1014 23:30:01.584156 1548 log.go:181] (0xc0008dde40) Go away received\nI1014 23:30:01.584487 1548 log.go:181] (0xc0008dde40) (0xc0005acfa0) Stream removed, broadcasting: 1\nI1014 23:30:01.584520 1548 log.go:181] (0xc0008dde40) (0xc000616320) Stream removed, broadcasting: 3\nI1014 23:30:01.584536 1548 log.go:181] (0xc0008dde40) (0xc0008d4b40) Stream removed, broadcasting: 5\n" Oct 14 23:30:01.589: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:30:01.589: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:30:01.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2992 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:30:01.796: INFO: stderr: "I1014 23:30:01.720115 1566 log.go:181] (0xc000f1ce70) (0xc00052ca00) Create stream\nI1014 23:30:01.720169 1566 log.go:181] (0xc000f1ce70) (0xc00052ca00) Stream added, broadcasting: 1\nI1014 23:30:01.724262 1566 log.go:181] (0xc000f1ce70) Reply frame received for 1\nI1014 23:30:01.724346 1566 log.go:181] (0xc000f1ce70) (0xc0000ce000) Create stream\nI1014 23:30:01.724385 1566 log.go:181] (0xc000f1ce70) (0xc0000ce000) Stream added, broadcasting: 3\nI1014 23:30:01.725513 1566 log.go:181] (0xc000f1ce70) Reply frame received for 3\nI1014 23:30:01.725538 1566 log.go:181] (0xc000f1ce70) (0xc00019cf00) Create stream\nI1014 23:30:01.725545 1566 log.go:181] (0xc000f1ce70) (0xc00019cf00) Stream added, broadcasting: 5\nI1014 23:30:01.726358 1566 log.go:181] (0xc000f1ce70) Reply frame received for 5\nI1014 23:30:01.789752 1566 log.go:181] (0xc000f1ce70) Data frame received for 3\nI1014 23:30:01.789810 1566 log.go:181] (0xc0000ce000) (3) Data frame handling\nI1014 23:30:01.789838 1566 log.go:181] (0xc0000ce000) (3) Data frame sent\nI1014 23:30:01.789857 1566 log.go:181] (0xc000f1ce70) Data frame received for 3\nI1014 23:30:01.789876 1566 log.go:181] (0xc0000ce000) (3) Data frame handling\nI1014 23:30:01.789898 1566 log.go:181] (0xc000f1ce70) Data frame received for 5\nI1014 23:30:01.789926 1566 log.go:181] (0xc00019cf00) (5) Data frame handling\nI1014 23:30:01.789941 1566 log.go:181] (0xc00019cf00) (5) Data frame sent\nI1014 23:30:01.789957 1566 log.go:181] (0xc000f1ce70) Data frame received for 5\nI1014 23:30:01.789968 1566 log.go:181] (0xc00019cf00) (5) Data frame handling\n+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nI1014 23:30:01.791031 1566 log.go:181] (0xc000f1ce70) Data frame received for 1\nI1014 23:30:01.791060 1566 log.go:181] (0xc00052ca00) (1) Data frame handling\nI1014 23:30:01.791093 1566 log.go:181] (0xc00052ca00) (1) Data frame sent\nI1014 23:30:01.791184 1566 log.go:181] (0xc000f1ce70) (0xc00052ca00) Stream removed, broadcasting: 1\nI1014 23:30:01.791207 1566 log.go:181] (0xc000f1ce70) Go away received\nI1014 23:30:01.791535 1566 log.go:181] (0xc000f1ce70) (0xc00052ca00) Stream removed, broadcasting: 1\nI1014 23:30:01.791550 1566 log.go:181] (0xc000f1ce70) (0xc0000ce000) Stream removed, broadcasting: 3\nI1014 23:30:01.791557 1566 log.go:181] (0xc000f1ce70) (0xc00019cf00) Stream removed, broadcasting: 5\n" Oct 14 23:30:01.796: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:30:01.796: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:30:01.796: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 23:30:21.811: INFO: Deleting all statefulset in ns statefulset-2992 Oct 14 23:30:21.815: INFO: Scaling statefulset ss to 0 Oct 14 23:30:21.826: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:30:21.828: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:30:21.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2992" for this suite. 
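The scale-down halt exercised above works by breaking the pods' HTTP readiness probe: moving index.html out of the httpd docroot makes the probe fail, the pod goes NotReady, and ordered scaling stops until it is restored. A minimal sketch of the same toggle by hand, assuming the namespace and pod names from the log:

```shell
# Break the readiness probe on ss-0: the probed file leaves the docroot.
# "|| true" keeps the exec exit code zero even if the file was already moved.
kubectl exec -n statefulset-2992 ss-0 -- \
  /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

# Restore the file so the probe passes again and scaling can proceed:
kubectl exec -n statefulset-2992 ss-0 -- \
  /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

# Watch the READY column flip while the StatefulSet controller waits:
kubectl get pods -n statefulset-2992 -w
```

The controller scales up in ordinal order (ss-0, ss-1, ss-2) and scales down in reverse, and will not move past an unhealthy pod in either direction, which is exactly what the "scaled up in order" / "scaled down in reverse order" steps verify.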
• [SLOW TEST:82.613 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":130,"skipped":1906,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:30:21.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
cm-test-opt-del-7d8f9017-b511-4008-8c3b-a2d833fddb60 STEP: Creating configMap with name cm-test-opt-upd-5a21e513-cc86-48c2-8cdf-8e9fa37e66b7 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7d8f9017-b511-4008-8c3b-a2d833fddb60 STEP: Updating configmap cm-test-opt-upd-5a21e513-cc86-48c2-8cdf-8e9fa37e66b7 STEP: Creating configMap with name cm-test-opt-create-f9b9d9c3-9c45-430d-bca1-18f8b9bf149b STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:30:30.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8526" for this suite. • [SLOW TEST:8.244 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":131,"skipped":1926,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Oct 14 23:30:30.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 14 23:30:30.207: INFO: Waiting up to 1m0s for all nodes to be ready Oct 14 23:31:30.231: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 14 23:31:30.296: INFO: Created pod: pod0-sched-preemption-low-priority Oct 14 23:31:30.344: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:31:54.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-7338" for this suite. 
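The preemption scenario above (low- and medium-priority pods filling the nodes, then a high-priority pod displacing one) can be reproduced with an ordinary PriorityClass. A sketch with hypothetical names and arbitrary priority values:

```shell
# Hypothetical PriorityClass; the value only matters relative to other classes.
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "Pods in this class may preempt lower-priority pods"
EOF

# A pod using the class; if no node has room, the scheduler evicts a
# lower-priority pod to make space for it.
kubectl run preemptor --image=registry.k8s.io/pause:3.9 \
  --overrides='{"spec":{"priorityClassName":"high-priority"}}'
```

The displaced pod's events will show a Preempted reason, mirroring what the test asserts when its high-priority pod lands on a full node.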
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:84.443 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":132,"skipped":1978,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:31:54.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 14 23:31:54.590: INFO: Waiting up to 5m0s for pod "pod-d46d6aa4-86ea-4f75-afd5-49513650274b" in namespace "emptydir-7324" to be "Succeeded or Failed" Oct 14 
23:31:54.603: INFO: Pod "pod-d46d6aa4-86ea-4f75-afd5-49513650274b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.697901ms Oct 14 23:31:56.608: INFO: Pod "pod-d46d6aa4-86ea-4f75-afd5-49513650274b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017183445s Oct 14 23:31:58.612: INFO: Pod "pod-d46d6aa4-86ea-4f75-afd5-49513650274b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021121079s STEP: Saw pod success Oct 14 23:31:58.612: INFO: Pod "pod-d46d6aa4-86ea-4f75-afd5-49513650274b" satisfied condition "Succeeded or Failed" Oct 14 23:31:58.614: INFO: Trying to get logs from node leguer-worker2 pod pod-d46d6aa4-86ea-4f75-afd5-49513650274b container test-container: STEP: delete the pod Oct 14 23:31:58.668: INFO: Waiting for pod pod-d46d6aa4-86ea-4f75-afd5-49513650274b to disappear Oct 14 23:31:58.670: INFO: Pod pod-d46d6aa4-86ea-4f75-afd5-49513650274b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:31:58.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7324" for this suite. 
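The emptyDir case above mounts a tmpfs-backed volume and checks that files created under it get the expected 0777 mode. A rough equivalent of the test pod, with hypothetical names:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Show the mount type and the directory's permission bits,
    # analogous to what the conformance test verifies.
    command: ["sh", "-c", "mount | grep /test-volume; ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory    # Memory medium = tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo   # inspect after the pod reaches Succeeded
```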
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":1982,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:31:58.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Oct 14 23:31:58.743: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-resource-version c4058cc3-c26d-4d69-9fef-32395d067931 2955764 0 2020-10-14 23:31:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-14 23:31:58 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:31:58.744: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-3548 /api/v1/namespaces/watch-3548/configmaps/e2e-watch-test-resource-version c4058cc3-c26d-4d69-9fef-32395d067931 2955765 0 2020-10-14 23:31:58 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-14 23:31:58 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:31:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3548" for this suite. 
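The watch test above records the resourceVersion returned by the first update, then opens a watch from that version; the API server replays every subsequent change, which is why exactly the later MODIFIED and DELETED events arrive. The same mechanism from the command line, assuming the namespace and configmap name from the log:

```shell
# Read the object's current resourceVersion:
RV=$(kubectl -n watch-3548 get configmap e2e-watch-test-resource-version \
  -o jsonpath='{.metadata.resourceVersion}')

# Start a watch from that version; every event after RV streams back
# as one JSON watch event per line (ADDED/MODIFIED/DELETED).
kubectl get --raw \
  "/api/v1/namespaces/watch-3548/configmaps?watch=true&resourceVersion=${RV}"
```

Watches started from an old resourceVersion fail with "410 Gone" once the server has compacted its history, so real clients re-list and restart the watch on that error.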
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":134,"skipped":2049,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:31:58.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Starting the proxy Oct 14 23:31:58.835: INFO: Asynchronously running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix729695225/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:31:58.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9564" for this suite. 
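The proxy test above binds the API proxy to a Unix domain socket rather than a TCP port. The same setup can be exercised directly with curl, assuming an arbitrary socket path:

```shell
# Serve the proxied API on a Unix socket instead of localhost:8001:
kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1

# Query /api/ over the socket; the "localhost" host part is ignored,
# only the path matters.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/

kill %1   # stop the background proxy
```

A Unix socket restricts access to local users with filesystem permission on the socket, which is the property the test relies on.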
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":135,"skipped":2069,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:31:58.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 14 23:32:05.526: INFO: Successfully updated pod "adopt-release-r65hx" STEP: Checking that the Job readopts the Pod Oct 14 23:32:05.526: INFO: Waiting up to 15m0s for pod "adopt-release-r65hx" in namespace "job-7928" to be "adopted" Oct 14 23:32:05.679: INFO: Pod "adopt-release-r65hx": Phase="Running", Reason="", readiness=true. Elapsed: 152.512227ms Oct 14 23:32:07.684: INFO: Pod "adopt-release-r65hx": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.157894078s Oct 14 23:32:07.684: INFO: Pod "adopt-release-r65hx" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 14 23:32:08.198: INFO: Successfully updated pod "adopt-release-r65hx" STEP: Checking that the Job releases the Pod Oct 14 23:32:08.198: INFO: Waiting up to 15m0s for pod "adopt-release-r65hx" in namespace "job-7928" to be "released" Oct 14 23:32:08.237: INFO: Pod "adopt-release-r65hx": Phase="Running", Reason="", readiness=true. Elapsed: 38.963721ms Oct 14 23:32:08.237: INFO: Pod "adopt-release-r65hx" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:32:08.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7928" for this suite. • [SLOW TEST:9.396 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":136,"skipped":2091,"failed":0} S ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Oct 14 23:32:08.311: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 14 23:32:08.436: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:32:08.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6583" for this suite. 
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":137,"skipped":2092,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:32:08.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-8453 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-8453 STEP: Creating statefulset with conflicting port in namespace statefulset-8453 STEP: Waiting until pod test-pod will start running in namespace statefulset-8453 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-8453 Oct 14 23:32:14.796: INFO: 
Observed stateful pod in namespace: statefulset-8453, name: ss-0, uid: 09abbfea-349f-48af-9654-1ff8202eaeb5, status phase: Pending. Waiting for statefulset controller to delete. Oct 14 23:32:14.915: INFO: Observed stateful pod in namespace: statefulset-8453, name: ss-0, uid: 09abbfea-349f-48af-9654-1ff8202eaeb5, status phase: Failed. Waiting for statefulset controller to delete. Oct 14 23:32:14.965: INFO: Observed stateful pod in namespace: statefulset-8453, name: ss-0, uid: 09abbfea-349f-48af-9654-1ff8202eaeb5, status phase: Failed. Waiting for statefulset controller to delete. Oct 14 23:32:14.970: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8453 STEP: Removing pod with conflicting port in namespace statefulset-8453 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-8453 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 14 23:32:21.100: INFO: Deleting all statefulset in ns statefulset-8453 Oct 14 23:32:21.102: INFO: Scaling statefulset ss to 0 Oct 14 23:32:31.121: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:32:31.123: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:32:31.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8453" for this suite. 
• [SLOW TEST:22.606 seconds]
[sig-apps] StatefulSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":138,"skipped":2104,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:32:31.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-downwardapi-rbbd
STEP: Creating a pod to test atomic-volume-subpath
Oct 14 23:32:31.308: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rbbd" in namespace "subpath-8046" to be "Succeeded or Failed"
Oct 14 23:32:31.311: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382809ms
Oct 14 23:32:33.372: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064359642s
Oct 14 23:32:35.376: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 4.068499987s
Oct 14 23:32:37.381: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 6.073141931s
Oct 14 23:32:39.386: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 8.078088973s
Oct 14 23:32:41.390: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 10.08235554s
Oct 14 23:32:43.394: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 12.086379168s
Oct 14 23:32:45.423: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 14.11542985s
Oct 14 23:32:47.428: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 16.120123448s
Oct 14 23:32:49.433: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 18.125196708s
Oct 14 23:32:51.436: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 20.128432431s
Oct 14 23:32:53.441: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Running", Reason="", readiness=true. Elapsed: 22.133062706s
Oct 14 23:32:55.445: INFO: Pod "pod-subpath-test-downwardapi-rbbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.137137852s
STEP: Saw pod success
Oct 14 23:32:55.445: INFO: Pod "pod-subpath-test-downwardapi-rbbd" satisfied condition "Succeeded or Failed"
Oct 14 23:32:55.448: INFO: Trying to get logs from node leguer-worker2 pod pod-subpath-test-downwardapi-rbbd container test-container-subpath-downwardapi-rbbd:
STEP: delete the pod
Oct 14 23:32:55.493: INFO: Waiting for pod pod-subpath-test-downwardapi-rbbd to disappear
Oct 14 23:32:55.501: INFO: Pod pod-subpath-test-downwardapi-rbbd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rbbd
Oct 14 23:32:55.501: INFO: Deleting pod "pod-subpath-test-downwardapi-rbbd" in namespace "subpath-8046"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:32:55.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8046" for this suite.
• [SLOW TEST:24.365 seconds]
[sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":139,"skipped":2115,"failed":0}
S
------------------------------
[sig-network] Services should test the lifecycle of an Endpoint [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:32:55.511: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:32:55.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1172" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":140,"skipped":2116,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:32:55.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating pod
Oct 14 23:32:59.745: INFO: Pod pod-hostip-c5bc281c-e169-4fd7-848f-8d463637cf96 has hostIP: 172.18.0.17
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:32:59.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9744" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":141,"skipped":2138,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:32:59.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Oct 14 23:32:59.859: INFO: Waiting up to 5m0s for pod "var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009" in namespace "var-expansion-6316" to be "Succeeded or Failed" Oct 14 23:32:59.875: INFO: Pod "var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009": Phase="Pending", Reason="", readiness=false. Elapsed: 15.9205ms Oct 14 23:33:01.881: INFO: Pod "var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021650609s Oct 14 23:33:03.885: INFO: Pod "var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025560606s STEP: Saw pod success Oct 14 23:33:03.885: INFO: Pod "var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009" satisfied condition "Succeeded or Failed" Oct 14 23:33:03.887: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009 container dapi-container: STEP: delete the pod Oct 14 23:33:03.923: INFO: Waiting for pod var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009 to disappear Oct 14 23:33:03.935: INFO: Pod var-expansion-1faaf6d6-dcfe-4b21-8777-aff2923d8009 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:33:03.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6316" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":142,"skipped":2139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:33:03.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should 
release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 14 23:33:04.047: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 14 23:33:09.089: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:33:09.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1450" for this suite. • [SLOW TEST:5.272 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":143,"skipped":2166,"failed":0} SSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 
23:33:09.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 14 23:33:20.392: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:20.400: INFO: Pod pod-with-poststart-http-hook still exists Oct 14 23:33:22.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:22.406: INFO: Pod pod-with-poststart-http-hook still exists Oct 14 23:33:24.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:24.406: INFO: Pod pod-with-poststart-http-hook still exists Oct 14 23:33:26.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:26.406: INFO: Pod pod-with-poststart-http-hook still exists Oct 14 23:33:28.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:28.405: INFO: Pod pod-with-poststart-http-hook still exists Oct 14 23:33:30.400: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Oct 14 23:33:30.404: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:33:30.404: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1538" for this suite. • [SLOW TEST:21.191 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":144,"skipped":2169,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:33:30.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting 
up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:33:31.224: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:33:33.285: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315211, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315211, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315211, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315211, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:33:36.353: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a 
non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:33:46.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4948" for this suite. STEP: Destroying namespace "webhook-4948-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.201 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":145,"skipped":2173,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:33:46.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Oct 14 23:33:46.653: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:33:53.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3368" for this suite.
• [SLOW TEST:6.421 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":146,"skipped":2199,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:33:53.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:33:53.134: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 14 23:33:56.111: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8019 create -f -' Oct 14 23:33:59.591: INFO: stderr: "" Oct 14 23:33:59.591: INFO: stdout: "e2e-test-crd-publish-openapi-6230-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 14 23:33:59.591: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8019 delete e2e-test-crd-publish-openapi-6230-crds test-cr' Oct 14 23:33:59.720: INFO: stderr: "" Oct 14 23:33:59.720: INFO: stdout: "e2e-test-crd-publish-openapi-6230-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Oct 14 23:33:59.721: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8019 apply -f -' Oct 14 23:34:00.047: INFO: stderr: "" Oct 14 23:34:00.047: INFO: stdout: "e2e-test-crd-publish-openapi-6230-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Oct 14 23:34:00.047: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8019 delete e2e-test-crd-publish-openapi-6230-crds test-cr' Oct 14 23:34:00.159: INFO: stderr: "" Oct 14 23:34:00.159: INFO: stdout: "e2e-test-crd-publish-openapi-6230-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Oct 14 23:34:00.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6230-crds' Oct 14 23:34:00.455: INFO: stderr: "" Oct 14 23:34:00.455: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6230-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:34:03.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8019" for this suite. • [SLOW TEST:10.408 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":147,"skipped":2217,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:34:03.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 14 23:34:03.587: INFO: >>> kubeConfig: /root/.kube/config Oct 14 23:34:06.568: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:34:17.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8792" for this suite. • [SLOW TEST:13.985 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":148,"skipped":2220,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Oct 14 23:34:17.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5324.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 49.166.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.166.49_udp@PTR;check="$$(dig +tcp +noall +answer +search 49.166.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.166.49_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5324.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5324.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5324.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5324.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5324.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 49.166.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.166.49_udp@PTR;check="$$(dig +tcp +noall +answer +search 49.166.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.166.49_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 23:34:23.617: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.621: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.624: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.627: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.649: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod 
dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.653: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.659: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:23.678: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:28.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.688: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod 
dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.719: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.725: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.728: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:28.740: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:33.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.691: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.718: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.721: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested 
resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.724: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.726: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:33.744: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:38.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.690: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods 
dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.693: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.713: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.715: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.718: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.721: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:38.772: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:43.682: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.684: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.686: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.689: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.708: INFO: Unable to read jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.711: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.714: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.717: 
INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:43.735: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:48.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.687: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.690: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.694: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.718: INFO: Unable to read 
jessie_udp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.726: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.729: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local from pod dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b: the server could not find the requested resource (get pods dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b) Oct 14 23:34:48.749: INFO: Lookups using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b failed for: [wheezy_udp@dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@dns-test-service.dns-5324.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_udp@dns-test-service.dns-5324.svc.cluster.local jessie_tcp@dns-test-service.dns-5324.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5324.svc.cluster.local] Oct 14 23:34:53.774: INFO: DNS probes using dns-5324/dns-test-b59274ef-6bf7-406c-ab1f-5dba304b945b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:34:54.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5324" for this suite. • [SLOW TEST:37.332 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":149,"skipped":2241,"failed":0} S ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:34:54.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service 
in namespace services-2937 Oct 14 23:34:58.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 14 23:34:59.085: INFO: stderr: "I1014 23:34:58.974307 1694 log.go:181] (0xc000b3d290) (0xc000e8c960) Create stream\nI1014 23:34:58.974374 1694 log.go:181] (0xc000b3d290) (0xc000e8c960) Stream added, broadcasting: 1\nI1014 23:34:58.977981 1694 log.go:181] (0xc000b3d290) Reply frame received for 1\nI1014 23:34:58.978023 1694 log.go:181] (0xc000b3d290) (0xc0007f8140) Create stream\nI1014 23:34:58.978049 1694 log.go:181] (0xc000b3d290) (0xc0007f8140) Stream added, broadcasting: 3\nI1014 23:34:58.978910 1694 log.go:181] (0xc000b3d290) Reply frame received for 3\nI1014 23:34:58.978945 1694 log.go:181] (0xc000b3d290) (0xc0007f81e0) Create stream\nI1014 23:34:58.978959 1694 log.go:181] (0xc000b3d290) (0xc0007f81e0) Stream added, broadcasting: 5\nI1014 23:34:58.979759 1694 log.go:181] (0xc000b3d290) Reply frame received for 5\nI1014 23:34:59.071065 1694 log.go:181] (0xc000b3d290) Data frame received for 5\nI1014 23:34:59.071091 1694 log.go:181] (0xc0007f81e0) (5) Data frame handling\nI1014 23:34:59.071103 1694 log.go:181] (0xc0007f81e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1014 23:34:59.076130 1694 log.go:181] (0xc000b3d290) Data frame received for 3\nI1014 23:34:59.076159 1694 log.go:181] (0xc0007f8140) (3) Data frame handling\nI1014 23:34:59.076191 1694 log.go:181] (0xc0007f8140) (3) Data frame sent\nI1014 23:34:59.076495 1694 log.go:181] (0xc000b3d290) Data frame received for 3\nI1014 23:34:59.076517 1694 log.go:181] (0xc0007f8140) (3) Data frame handling\nI1014 23:34:59.076548 1694 log.go:181] (0xc000b3d290) Data frame received for 5\nI1014 23:34:59.076572 1694 log.go:181] (0xc0007f81e0) (5) Data frame handling\nI1014 
23:34:59.078942 1694 log.go:181] (0xc000b3d290) Data frame received for 1\nI1014 23:34:59.078966 1694 log.go:181] (0xc000e8c960) (1) Data frame handling\nI1014 23:34:59.078986 1694 log.go:181] (0xc000e8c960) (1) Data frame sent\nI1014 23:34:59.079190 1694 log.go:181] (0xc000b3d290) (0xc000e8c960) Stream removed, broadcasting: 1\nI1014 23:34:59.079278 1694 log.go:181] (0xc000b3d290) Go away received\nI1014 23:34:59.079823 1694 log.go:181] (0xc000b3d290) (0xc000e8c960) Stream removed, broadcasting: 1\nI1014 23:34:59.079844 1694 log.go:181] (0xc000b3d290) (0xc0007f8140) Stream removed, broadcasting: 3\nI1014 23:34:59.079856 1694 log.go:181] (0xc000b3d290) (0xc0007f81e0) Stream removed, broadcasting: 5\n" Oct 14 23:34:59.085: INFO: stdout: "iptables" Oct 14 23:34:59.085: INFO: proxyMode: iptables Oct 14 23:34:59.091: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 23:34:59.115: INFO: Pod kube-proxy-mode-detector still exists Oct 14 23:35:01.115: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 23:35:01.120: INFO: Pod kube-proxy-mode-detector still exists Oct 14 23:35:03.115: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 14 23:35:03.119: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-2937 STEP: creating replication controller affinity-nodeport-timeout in namespace services-2937 I1014 23:35:03.198708 7 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2937, replica count: 3 I1014 23:35:06.249175 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:35:09.249459 7 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:35:12.249668 7 runners.go:190] 
affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:35:12.261: INFO: Creating new exec pod Oct 14 23:35:17.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Oct 14 23:35:17.585: INFO: stderr: "I1014 23:35:17.470022 1713 log.go:181] (0xc000bf4000) (0xc000bf8960) Create stream\nI1014 23:35:17.470109 1713 log.go:181] (0xc000bf4000) (0xc000bf8960) Stream added, broadcasting: 1\nI1014 23:35:17.477887 1713 log.go:181] (0xc000bf4000) Reply frame received for 1\nI1014 23:35:17.477966 1713 log.go:181] (0xc000bf4000) (0xc000bf8000) Create stream\nI1014 23:35:17.477992 1713 log.go:181] (0xc000bf4000) (0xc000bf8000) Stream added, broadcasting: 3\nI1014 23:35:17.479031 1713 log.go:181] (0xc000bf4000) Reply frame received for 3\nI1014 23:35:17.479070 1713 log.go:181] (0xc000bf4000) (0xc000bf8140) Create stream\nI1014 23:35:17.479081 1713 log.go:181] (0xc000bf4000) (0xc000bf8140) Stream added, broadcasting: 5\nI1014 23:35:17.479954 1713 log.go:181] (0xc000bf4000) Reply frame received for 5\nI1014 23:35:17.576679 1713 log.go:181] (0xc000bf4000) Data frame received for 5\nI1014 23:35:17.576709 1713 log.go:181] (0xc000bf8140) (5) Data frame handling\nI1014 23:35:17.576724 1713 log.go:181] (0xc000bf8140) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nI1014 23:35:17.577085 1713 log.go:181] (0xc000bf4000) Data frame received for 5\nI1014 23:35:17.577111 1713 log.go:181] (0xc000bf8140) (5) Data frame handling\nI1014 23:35:17.577132 1713 log.go:181] (0xc000bf8140) (5) Data frame sent\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1014 23:35:17.577625 1713 log.go:181] (0xc000bf4000) Data frame received for 3\nI1014 23:35:17.577656 1713 log.go:181] (0xc000bf4000) Data 
frame received for 5\nI1014 23:35:17.577693 1713 log.go:181] (0xc000bf8140) (5) Data frame handling\nI1014 23:35:17.577729 1713 log.go:181] (0xc000bf8000) (3) Data frame handling\nI1014 23:35:17.579399 1713 log.go:181] (0xc000bf4000) Data frame received for 1\nI1014 23:35:17.579416 1713 log.go:181] (0xc000bf8960) (1) Data frame handling\nI1014 23:35:17.579423 1713 log.go:181] (0xc000bf8960) (1) Data frame sent\nI1014 23:35:17.579433 1713 log.go:181] (0xc000bf4000) (0xc000bf8960) Stream removed, broadcasting: 1\nI1014 23:35:17.579440 1713 log.go:181] (0xc000bf4000) Go away received\nI1014 23:35:17.579932 1713 log.go:181] (0xc000bf4000) (0xc000bf8960) Stream removed, broadcasting: 1\nI1014 23:35:17.579975 1713 log.go:181] (0xc000bf4000) (0xc000bf8000) Stream removed, broadcasting: 3\nI1014 23:35:17.579988 1713 log.go:181] (0xc000bf4000) (0xc000bf8140) Stream removed, broadcasting: 5\n" Oct 14 23:35:17.585: INFO: stdout: "" Oct 14 23:35:17.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c nc -zv -t -w 2 10.99.224.215 80' Oct 14 23:35:17.799: INFO: stderr: "I1014 23:35:17.716962 1731 log.go:181] (0xc000e30f20) (0xc000f085a0) Create stream\nI1014 23:35:17.717028 1731 log.go:181] (0xc000e30f20) (0xc000f085a0) Stream added, broadcasting: 1\nI1014 23:35:17.722804 1731 log.go:181] (0xc000e30f20) Reply frame received for 1\nI1014 23:35:17.722859 1731 log.go:181] (0xc000e30f20) (0xc000c9c0a0) Create stream\nI1014 23:35:17.722872 1731 log.go:181] (0xc000e30f20) (0xc000c9c0a0) Stream added, broadcasting: 3\nI1014 23:35:17.724036 1731 log.go:181] (0xc000e30f20) Reply frame received for 3\nI1014 23:35:17.724099 1731 log.go:181] (0xc000e30f20) (0xc000e28000) Create stream\nI1014 23:35:17.724115 1731 log.go:181] (0xc000e30f20) (0xc000e28000) Stream added, broadcasting: 5\nI1014 23:35:17.725414 1731 log.go:181] (0xc000e30f20) Reply frame received for 
5\nI1014 23:35:17.791806 1731 log.go:181] (0xc000e30f20) Data frame received for 3\nI1014 23:35:17.791842 1731 log.go:181] (0xc000c9c0a0) (3) Data frame handling\nI1014 23:35:17.792221 1731 log.go:181] (0xc000e30f20) Data frame received for 5\nI1014 23:35:17.792256 1731 log.go:181] (0xc000e28000) (5) Data frame handling\nI1014 23:35:17.792286 1731 log.go:181] (0xc000e28000) (5) Data frame sent\nI1014 23:35:17.792306 1731 log.go:181] (0xc000e30f20) Data frame received for 5\n+ nc -zv -t -w 2 10.99.224.215 80\nConnection to 10.99.224.215 80 port [tcp/http] succeeded!\nI1014 23:35:17.792331 1731 log.go:181] (0xc000e28000) (5) Data frame handling\nI1014 23:35:17.793828 1731 log.go:181] (0xc000e30f20) Data frame received for 1\nI1014 23:35:17.793870 1731 log.go:181] (0xc000f085a0) (1) Data frame handling\nI1014 23:35:17.793896 1731 log.go:181] (0xc000f085a0) (1) Data frame sent\nI1014 23:35:17.793914 1731 log.go:181] (0xc000e30f20) (0xc000f085a0) Stream removed, broadcasting: 1\nI1014 23:35:17.793940 1731 log.go:181] (0xc000e30f20) Go away received\nI1014 23:35:17.794237 1731 log.go:181] (0xc000e30f20) (0xc000f085a0) Stream removed, broadcasting: 1\nI1014 23:35:17.794255 1731 log.go:181] (0xc000e30f20) (0xc000c9c0a0) Stream removed, broadcasting: 3\nI1014 23:35:17.794265 1731 log.go:181] (0xc000e30f20) (0xc000e28000) Stream removed, broadcasting: 5\n" Oct 14 23:35:17.799: INFO: stdout: "" Oct 14 23:35:17.799: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 31195' Oct 14 23:35:18.024: INFO: stderr: "I1014 23:35:17.942667 1749 log.go:181] (0xc000f011e0) (0xc0005c88c0) Create stream\nI1014 23:35:17.942732 1749 log.go:181] (0xc000f011e0) (0xc0005c88c0) Stream added, broadcasting: 1\nI1014 23:35:17.948198 1749 log.go:181] (0xc000f011e0) Reply frame received for 1\nI1014 23:35:17.948234 1749 log.go:181] (0xc000f011e0) 
(0xc000ad00a0) Create stream\nI1014 23:35:17.948245 1749 log.go:181] (0xc000f011e0) (0xc000ad00a0) Stream added, broadcasting: 3\nI1014 23:35:17.949392 1749 log.go:181] (0xc000f011e0) Reply frame received for 3\nI1014 23:35:17.949424 1749 log.go:181] (0xc000f011e0) (0xc0005c8000) Create stream\nI1014 23:35:17.949434 1749 log.go:181] (0xc000f011e0) (0xc0005c8000) Stream added, broadcasting: 5\nI1014 23:35:17.950278 1749 log.go:181] (0xc000f011e0) Reply frame received for 5\nI1014 23:35:18.015158 1749 log.go:181] (0xc000f011e0) Data frame received for 5\nI1014 23:35:18.015202 1749 log.go:181] (0xc0005c8000) (5) Data frame handling\nI1014 23:35:18.015239 1749 log.go:181] (0xc0005c8000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.18 31195\nI1014 23:35:18.015390 1749 log.go:181] (0xc000f011e0) Data frame received for 5\nI1014 23:35:18.015415 1749 log.go:181] (0xc0005c8000) (5) Data frame handling\nI1014 23:35:18.015447 1749 log.go:181] (0xc0005c8000) (5) Data frame sent\nConnection to 172.18.0.18 31195 port [tcp/31195] succeeded!\nI1014 23:35:18.015774 1749 log.go:181] (0xc000f011e0) Data frame received for 5\nI1014 23:35:18.015797 1749 log.go:181] (0xc0005c8000) (5) Data frame handling\nI1014 23:35:18.015818 1749 log.go:181] (0xc000f011e0) Data frame received for 3\nI1014 23:35:18.015837 1749 log.go:181] (0xc000ad00a0) (3) Data frame handling\nI1014 23:35:18.017637 1749 log.go:181] (0xc000f011e0) Data frame received for 1\nI1014 23:35:18.017682 1749 log.go:181] (0xc0005c88c0) (1) Data frame handling\nI1014 23:35:18.017719 1749 log.go:181] (0xc0005c88c0) (1) Data frame sent\nI1014 23:35:18.017750 1749 log.go:181] (0xc000f011e0) (0xc0005c88c0) Stream removed, broadcasting: 1\nI1014 23:35:18.017788 1749 log.go:181] (0xc000f011e0) Go away received\nI1014 23:35:18.018269 1749 log.go:181] (0xc000f011e0) (0xc0005c88c0) Stream removed, broadcasting: 1\nI1014 23:35:18.018298 1749 log.go:181] (0xc000f011e0) (0xc000ad00a0) Stream removed, broadcasting: 3\nI1014 23:35:18.018311 
1749 log.go:181] (0xc000f011e0) (0xc0005c8000) Stream removed, broadcasting: 5\n" Oct 14 23:35:18.024: INFO: stdout: "" Oct 14 23:35:18.024: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 31195' Oct 14 23:35:18.239: INFO: stderr: "I1014 23:35:18.155589 1767 log.go:181] (0xc000d97130) (0xc000d8e6e0) Create stream\nI1014 23:35:18.155649 1767 log.go:181] (0xc000d97130) (0xc000d8e6e0) Stream added, broadcasting: 1\nI1014 23:35:18.158252 1767 log.go:181] (0xc000d97130) Reply frame received for 1\nI1014 23:35:18.158296 1767 log.go:181] (0xc000d97130) (0xc000b8c000) Create stream\nI1014 23:35:18.158319 1767 log.go:181] (0xc000d97130) (0xc000b8c000) Stream added, broadcasting: 3\nI1014 23:35:18.159423 1767 log.go:181] (0xc000d97130) Reply frame received for 3\nI1014 23:35:18.159470 1767 log.go:181] (0xc000d97130) (0xc0005683c0) Create stream\nI1014 23:35:18.159497 1767 log.go:181] (0xc000d97130) (0xc0005683c0) Stream added, broadcasting: 5\nI1014 23:35:18.160418 1767 log.go:181] (0xc000d97130) Reply frame received for 5\nI1014 23:35:18.230858 1767 log.go:181] (0xc000d97130) Data frame received for 5\nI1014 23:35:18.230893 1767 log.go:181] (0xc0005683c0) (5) Data frame handling\nI1014 23:35:18.230926 1767 log.go:181] (0xc0005683c0) (5) Data frame sent\nI1014 23:35:18.230945 1767 log.go:181] (0xc000d97130) Data frame received for 5\nI1014 23:35:18.230962 1767 log.go:181] (0xc0005683c0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 31195\nConnection to 172.18.0.17 31195 port [tcp/31195] succeeded!\nI1014 23:35:18.231007 1767 log.go:181] (0xc0005683c0) (5) Data frame sent\nI1014 23:35:18.231369 1767 log.go:181] (0xc000d97130) Data frame received for 5\nI1014 23:35:18.231424 1767 log.go:181] (0xc0005683c0) (5) Data frame handling\nI1014 23:35:18.231464 1767 log.go:181] (0xc000d97130) Data frame received for 3\nI1014 
23:35:18.231489 1767 log.go:181] (0xc000b8c000) (3) Data frame handling\nI1014 23:35:18.233370 1767 log.go:181] (0xc000d97130) Data frame received for 1\nI1014 23:35:18.233399 1767 log.go:181] (0xc000d8e6e0) (1) Data frame handling\nI1014 23:35:18.233413 1767 log.go:181] (0xc000d8e6e0) (1) Data frame sent\nI1014 23:35:18.233426 1767 log.go:181] (0xc000d97130) (0xc000d8e6e0) Stream removed, broadcasting: 1\nI1014 23:35:18.233888 1767 log.go:181] (0xc000d97130) (0xc000d8e6e0) Stream removed, broadcasting: 1\nI1014 23:35:18.233933 1767 log.go:181] (0xc000d97130) Go away received\nI1014 23:35:18.234000 1767 log.go:181] (0xc000d97130) (0xc000b8c000) Stream removed, broadcasting: 3\nI1014 23:35:18.234043 1767 log.go:181] (0xc000d97130) (0xc0005683c0) Stream removed, broadcasting: 5\n" Oct 14 23:35:18.239: INFO: stdout: "" Oct 14 23:35:18.239: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:31195/ ; done' Oct 14 23:35:18.528: INFO: stderr: "I1014 23:35:18.365961 1785 log.go:181] (0xc00056f4a0) (0xc0005bec80) Create stream\nI1014 23:35:18.366021 1785 log.go:181] (0xc00056f4a0) (0xc0005bec80) Stream added, broadcasting: 1\nI1014 23:35:18.368607 1785 log.go:181] (0xc00056f4a0) Reply frame received for 1\nI1014 23:35:18.368640 1785 log.go:181] (0xc00056f4a0) (0xc0005bed20) Create stream\nI1014 23:35:18.368652 1785 log.go:181] (0xc00056f4a0) (0xc0005bed20) Stream added, broadcasting: 3\nI1014 23:35:18.370017 1785 log.go:181] (0xc00056f4a0) Reply frame received for 3\nI1014 23:35:18.370087 1785 log.go:181] (0xc00056f4a0) (0xc000622500) Create stream\nI1014 23:35:18.370109 1785 log.go:181] (0xc00056f4a0) (0xc000622500) Stream added, broadcasting: 5\nI1014 23:35:18.371132 1785 log.go:181] (0xc00056f4a0) Reply frame received for 5\nI1014 23:35:18.435225 1785 log.go:181] 
(0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.435249 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.435274 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.435309 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.435319 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.435331 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.438185 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.438198 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.438204 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.438621 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.438633 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.438639 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.438657 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.438662 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.438667 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.442523 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.442540 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.442552 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.443026 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.443048 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.443078 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.443146 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.443171 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 
23:35:18.443186 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.447307 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.447327 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.447348 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.447780 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.447804 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.447815 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.447825 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.447831 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.447837 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.452246 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.452267 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.452290 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.453005 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.453026 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.453044 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.453053 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.453064 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.453071 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.458497 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.458517 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.458529 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.459300 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.459330 1785 log.go:181] (0xc0005bed20) (3) Data frame 
handling\nI1014 23:35:18.459344 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.459382 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.459405 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.459421 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.464220 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.464244 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.464262 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.465029 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.465060 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.465083 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.465178 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.465203 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.465219 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.469530 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.469555 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.469571 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.470435 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.470468 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.470481 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.470507 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.470528 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.470547 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.475082 1785 log.go:181] (0xc00056f4a0) Data frame 
received for 3\nI1014 23:35:18.475120 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.475151 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.475578 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.475591 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.475599 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.475610 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.475617 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.475626 1785 log.go:181] (0xc000622500) (5) Data frame sent\nI1014 23:35:18.475631 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.475634 1785 log.go:181] (0xc000622500) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.475647 1785 log.go:181] (0xc000622500) (5) Data frame sent\nI1014 23:35:18.479565 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.479579 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.479586 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.480110 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.480137 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.480174 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.480194 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.480209 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.480229 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.483408 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.483430 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.483444 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.483807 1785 log.go:181] 
(0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.483831 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.483856 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.483873 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.483889 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.483909 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.489294 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.489313 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.489322 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.490169 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.490182 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.490188 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.490208 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.490228 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.490244 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.495238 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.495258 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.495266 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.495908 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.495920 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.495927 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.495963 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.495988 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.496008 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.500753 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.500768 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.500778 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.501796 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.501812 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.501822 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.501834 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.501853 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.501862 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.506237 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.506274 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.506305 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.506789 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.506829 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.506871 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.506901 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.506932 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.506963 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.511307 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.511336 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.511362 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.512233 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.512260 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 
23:35:18.512307 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.512350 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.512393 1785 log.go:181] (0xc000622500) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.512433 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.519552 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.519589 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.519618 1785 log.go:181] (0xc0005bed20) (3) Data frame sent\nI1014 23:35:18.520059 1785 log.go:181] (0xc00056f4a0) Data frame received for 5\nI1014 23:35:18.520076 1785 log.go:181] (0xc000622500) (5) Data frame handling\nI1014 23:35:18.520215 1785 log.go:181] (0xc00056f4a0) Data frame received for 3\nI1014 23:35:18.520236 1785 log.go:181] (0xc0005bed20) (3) Data frame handling\nI1014 23:35:18.521926 1785 log.go:181] (0xc00056f4a0) Data frame received for 1\nI1014 23:35:18.521945 1785 log.go:181] (0xc0005bec80) (1) Data frame handling\nI1014 23:35:18.521962 1785 log.go:181] (0xc0005bec80) (1) Data frame sent\nI1014 23:35:18.522050 1785 log.go:181] (0xc00056f4a0) (0xc0005bec80) Stream removed, broadcasting: 1\nI1014 23:35:18.522160 1785 log.go:181] (0xc00056f4a0) Go away received\nI1014 23:35:18.522456 1785 log.go:181] (0xc00056f4a0) (0xc0005bec80) Stream removed, broadcasting: 1\nI1014 23:35:18.522474 1785 log.go:181] (0xc00056f4a0) (0xc0005bed20) Stream removed, broadcasting: 3\nI1014 23:35:18.522483 1785 log.go:181] (0xc00056f4a0) (0xc000622500) Stream removed, broadcasting: 5\n" Oct 14 23:35:18.528: INFO: stdout: 
"\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk\naffinity-nodeport-timeout-gksfk" Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Received response from host: affinity-nodeport-timeout-gksfk Oct 14 23:35:18.528: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.18:31195/' Oct 14 23:35:18.741: INFO: stderr: "I1014 23:35:18.656654 1803 log.go:181] (0xc0009274a0) (0xc000e0ebe0) Create stream\nI1014 23:35:18.656708 1803 log.go:181] (0xc0009274a0) (0xc000e0ebe0) Stream added, broadcasting: 1\nI1014 23:35:18.659696 1803 log.go:181] (0xc0009274a0) Reply frame received for 1\nI1014 23:35:18.659742 1803 log.go:181] (0xc0009274a0) (0xc000e0e000) Create stream\nI1014 23:35:18.659756 1803 log.go:181] (0xc0009274a0) (0xc000e0e000) Stream added, broadcasting: 3\nI1014 23:35:18.660800 1803 log.go:181] (0xc0009274a0) Reply frame received for 3\nI1014 23:35:18.660954 1803 log.go:181] (0xc0009274a0) (0xc000866000) Create stream\nI1014 23:35:18.660977 1803 log.go:181] (0xc0009274a0) (0xc000866000) Stream added, broadcasting: 5\nI1014 23:35:18.661805 1803 log.go:181] (0xc0009274a0) Reply frame received for 5\nI1014 23:35:18.731357 1803 log.go:181] (0xc0009274a0) Data frame received for 5\nI1014 23:35:18.731398 1803 log.go:181] (0xc000866000) (5) Data frame handling\nI1014 23:35:18.731414 1803 log.go:181] (0xc000866000) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:18.734773 1803 log.go:181] (0xc0009274a0) Data frame received for 5\nI1014 23:35:18.734795 1803 log.go:181] (0xc000866000) (5) Data frame handling\nI1014 23:35:18.734816 1803 log.go:181] (0xc0009274a0) Data frame received for 3\nI1014 23:35:18.734825 1803 log.go:181] (0xc000e0e000) (3) Data frame handling\nI1014 23:35:18.734834 1803 log.go:181] (0xc000e0e000) (3) Data frame sent\nI1014 23:35:18.734841 1803 log.go:181] (0xc0009274a0) Data frame received for 3\nI1014 23:35:18.734846 1803 log.go:181] (0xc000e0e000) (3) Data frame handling\nI1014 23:35:18.735992 1803 log.go:181] (0xc0009274a0) Data frame received for 1\nI1014 23:35:18.736067 1803 log.go:181] 
(0xc000e0ebe0) (1) Data frame handling\nI1014 23:35:18.736088 1803 log.go:181] (0xc000e0ebe0) (1) Data frame sent\nI1014 23:35:18.736100 1803 log.go:181] (0xc0009274a0) (0xc000e0ebe0) Stream removed, broadcasting: 1\nI1014 23:35:18.736110 1803 log.go:181] (0xc0009274a0) Go away received\nI1014 23:35:18.736497 1803 log.go:181] (0xc0009274a0) (0xc000e0ebe0) Stream removed, broadcasting: 1\nI1014 23:35:18.736518 1803 log.go:181] (0xc0009274a0) (0xc000e0e000) Stream removed, broadcasting: 3\nI1014 23:35:18.736524 1803 log.go:181] (0xc0009274a0) (0xc000866000) Stream removed, broadcasting: 5\n" Oct 14 23:35:18.741: INFO: stdout: "affinity-nodeport-timeout-gksfk" Oct 14 23:35:33.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-2937 execpod-affinityxjg82 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.18:31195/' Oct 14 23:35:33.965: INFO: stderr: "I1014 23:35:33.869036 1821 log.go:181] (0xc0006c11e0) (0xc000656b40) Create stream\nI1014 23:35:33.869164 1821 log.go:181] (0xc0006c11e0) (0xc000656b40) Stream added, broadcasting: 1\nI1014 23:35:33.873910 1821 log.go:181] (0xc0006c11e0) Reply frame received for 1\nI1014 23:35:33.873963 1821 log.go:181] (0xc0006c11e0) (0xc000656000) Create stream\nI1014 23:35:33.873985 1821 log.go:181] (0xc0006c11e0) (0xc000656000) Stream added, broadcasting: 3\nI1014 23:35:33.874842 1821 log.go:181] (0xc0006c11e0) Reply frame received for 3\nI1014 23:35:33.874865 1821 log.go:181] (0xc0006c11e0) (0xc000394b40) Create stream\nI1014 23:35:33.874871 1821 log.go:181] (0xc0006c11e0) (0xc000394b40) Stream added, broadcasting: 5\nI1014 23:35:33.875629 1821 log.go:181] (0xc0006c11e0) Reply frame received for 5\nI1014 23:35:33.952827 1821 log.go:181] (0xc0006c11e0) Data frame received for 5\nI1014 23:35:33.952939 1821 log.go:181] (0xc000394b40) (5) Data frame handling\nI1014 23:35:33.952960 1821 log.go:181] (0xc000394b40) (5) Data frame sent\n+ curl -q 
-s --connect-timeout 2 http://172.18.0.18:31195/\nI1014 23:35:33.957146 1821 log.go:181] (0xc0006c11e0) Data frame received for 3\nI1014 23:35:33.957177 1821 log.go:181] (0xc000656000) (3) Data frame handling\nI1014 23:35:33.957203 1821 log.go:181] (0xc000656000) (3) Data frame sent\nI1014 23:35:33.957916 1821 log.go:181] (0xc0006c11e0) Data frame received for 5\nI1014 23:35:33.957949 1821 log.go:181] (0xc000394b40) (5) Data frame handling\nI1014 23:35:33.957975 1821 log.go:181] (0xc0006c11e0) Data frame received for 3\nI1014 23:35:33.958000 1821 log.go:181] (0xc000656000) (3) Data frame handling\nI1014 23:35:33.959297 1821 log.go:181] (0xc0006c11e0) Data frame received for 1\nI1014 23:35:33.959315 1821 log.go:181] (0xc000656b40) (1) Data frame handling\nI1014 23:35:33.959324 1821 log.go:181] (0xc000656b40) (1) Data frame sent\nI1014 23:35:33.959339 1821 log.go:181] (0xc0006c11e0) (0xc000656b40) Stream removed, broadcasting: 1\nI1014 23:35:33.959376 1821 log.go:181] (0xc0006c11e0) Go away received\nI1014 23:35:33.959714 1821 log.go:181] (0xc0006c11e0) (0xc000656b40) Stream removed, broadcasting: 1\nI1014 23:35:33.959733 1821 log.go:181] (0xc0006c11e0) (0xc000656000) Stream removed, broadcasting: 3\nI1014 23:35:33.959741 1821 log.go:181] (0xc0006c11e0) (0xc000394b40) Stream removed, broadcasting: 5\n" Oct 14 23:35:33.965: INFO: stdout: "affinity-nodeport-timeout-lk4sl" Oct 14 23:35:33.965: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-2937, will wait for the garbage collector to delete the pods Oct 14 23:35:34.090: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.52232ms Oct 14 23:35:34.590: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.237052ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 
23:35:50.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2937" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:55.680 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":150,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:35:50.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-ff47f690-3f2d-41cd-beb2-e402d1cfe496 STEP: Creating a pod to test consume configMaps Oct 14 23:35:50.575: INFO: Waiting up to 5m0s for pod "pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28" in namespace "configmap-9279" to be "Succeeded or Failed" Oct 14 23:35:50.584: INFO: Pod "pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28": Phase="Pending", Reason="", readiness=false. Elapsed: 9.077713ms Oct 14 23:35:52.588: INFO: Pod "pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012936224s Oct 14 23:35:54.592: INFO: Pod "pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017204964s STEP: Saw pod success Oct 14 23:35:54.592: INFO: Pod "pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28" satisfied condition "Succeeded or Failed" Oct 14 23:35:54.595: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28 container configmap-volume-test: STEP: delete the pod Oct 14 23:35:54.633: INFO: Waiting for pod pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28 to disappear Oct 14 23:35:54.650: INFO: Pod pod-configmaps-59360df7-b8a9-4daf-8e67-8cf686a73f28 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:35:54.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9279" for this suite. 
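
[Editor's note] For context, the manifest shape this conformance test exercises — a ConfigMap mounted as a volume and read back by a short-lived pod — can be sketched roughly as follows. Object names, the data key, and the busybox image are illustrative (the suite generates UUID-suffixed names and uses its own test images):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap        # illustrative; the test uses a UUID-suffixed name
data:
  data-1: value-1                # illustrative key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox               # assumed image; not taken from this run
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: example-configmap
```

As the log above shows, the test waits up to 5m for the pod to reach "Succeeded or Failed", then inspects the container log before deleting the pod.
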
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2273,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:35:54.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 14 23:35:54.755: INFO: Waiting up to 5m0s for pod "pod-05983ad8-60b5-45f0-b4cc-e2e952412397" in namespace "emptydir-7028" to be "Succeeded or Failed" Oct 14 23:35:54.757: INFO: Pod "pod-05983ad8-60b5-45f0-b4cc-e2e952412397": Phase="Pending", Reason="", readiness=false. Elapsed: 1.995604ms Oct 14 23:35:56.785: INFO: Pod "pod-05983ad8-60b5-45f0-b4cc-e2e952412397": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02995764s Oct 14 23:35:58.789: INFO: Pod "pod-05983ad8-60b5-45f0-b4cc-e2e952412397": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033913576s STEP: Saw pod success Oct 14 23:35:58.789: INFO: Pod "pod-05983ad8-60b5-45f0-b4cc-e2e952412397" satisfied condition "Succeeded or Failed" Oct 14 23:35:58.792: INFO: Trying to get logs from node leguer-worker pod pod-05983ad8-60b5-45f0-b4cc-e2e952412397 container test-container: STEP: delete the pod Oct 14 23:35:58.859: INFO: Waiting for pod pod-05983ad8-60b5-45f0-b4cc-e2e952412397 to disappear Oct 14 23:35:58.869: INFO: Pod pod-05983ad8-60b5-45f0-b4cc-e2e952412397 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:35:58.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7028" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2284,"failed":0} ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:35:58.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:35:58.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8" in namespace "downward-api-1711" to be "Succeeded or Failed" Oct 14 23:35:58.990: INFO: Pod "downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8": Phase="Pending", Reason="", readiness=false. Elapsed: 16.070712ms Oct 14 23:36:01.103: INFO: Pod "downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129750821s Oct 14 23:36:03.127: INFO: Pod "downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.153415191s STEP: Saw pod success Oct 14 23:36:03.127: INFO: Pod "downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8" satisfied condition "Succeeded or Failed" Oct 14 23:36:03.131: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8 container client-container: STEP: delete the pod Oct 14 23:36:03.155: INFO: Waiting for pod downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8 to disappear Oct 14 23:36:03.175: INFO: Pod downwardapi-volume-e4b0e574-8323-4b42-a8d5-1861f39838b8 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:03.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1711" for this suite. 
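
[Editor's note] The DefaultMode behavior being verified here can be sketched with a downwardAPI volume like the one below. The pod name, image, and mode value are illustrative, not taken from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # the mode under test; applies to all items lacking an explicit mode
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The container then checks that the projected file carries the expected permission bits.
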
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":153,"skipped":2284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:03.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:03.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-124" for this suite. STEP: Destroying namespace "nspatchtest-7c04b10d-d186-4ec1-91ed-1e2919aa1e48-3689" for this suite. 
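
[Editor's note] In manifest terms, the patch this test applies leaves the namespace with a label, roughly as below. The label key/value shown here are illustrative; the suite generates UUID-based namespace names:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: nspatchtest-example   # illustrative; see the generated name in the log above
  labels:
    testLabel: testValue      # illustrative; the test patches in a label, then reads it back
```
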
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":154,"skipped":2309,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:03.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:36:03.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972" in namespace "projected-1428" to be "Succeeded or Failed" Oct 14 23:36:03.469: INFO: Pod "downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972": Phase="Pending", Reason="", readiness=false. Elapsed: 25.435875ms Oct 14 23:36:05.472: INFO: Pod "downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028453984s Oct 14 23:36:07.476: INFO: Pod "downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032797273s STEP: Saw pod success Oct 14 23:36:07.477: INFO: Pod "downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972" satisfied condition "Succeeded or Failed" Oct 14 23:36:07.479: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972 container client-container: STEP: delete the pod Oct 14 23:36:07.508: INFO: Waiting for pod downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972 to disappear Oct 14 23:36:07.564: INFO: Pod downwardapi-volume-9cca2c02-c2f4-4383-ba8d-b1ceb8841972 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:07.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1428" for this suite. 
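
[Editor's note] The fallback behavior verified here — `limits.memory` projected via a resourceFieldRef when the container sets no memory limit — can be sketched as below. Names and the image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                      # assumed image
    command: ["cat", "/etc/podinfo/memory_limit"]
    # no resources.limits set: the projected value falls back to node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```
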
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2310,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:07.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:36:07.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516" in namespace "projected-2244" to be "Succeeded or Failed" Oct 14 23:36:07.650: INFO: Pod "downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516": Phase="Pending", Reason="", readiness=false. Elapsed: 3.960813ms Oct 14 23:36:09.655: INFO: Pod "downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008395448s Oct 14 23:36:11.659: INFO: Pod "downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012460993s STEP: Saw pod success Oct 14 23:36:11.659: INFO: Pod "downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516" satisfied condition "Succeeded or Failed" Oct 14 23:36:11.662: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516 container client-container: STEP: delete the pod Oct 14 23:36:11.902: INFO: Waiting for pod downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516 to disappear Oct 14 23:36:11.989: INFO: Pod downwardapi-volume-b7befdc5-2544-4be5-86c7-5656ae95f516 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:11.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2244" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2312,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:12.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:36:12.076: INFO: Creating deployment "test-recreate-deployment" Oct 14 23:36:12.086: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 14 23:36:12.127: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 14 23:36:14.305: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 14 23:36:14.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63738315372, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315372, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315372, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315372, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:36:16.311: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 14 23:36:16.346: INFO: Updating deployment test-recreate-deployment Oct 14 23:36:16.346: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 23:36:17.086: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1078 /apis/apps/v1/namespaces/deployment-1078/deployments/test-recreate-deployment e5561fdf-695d-4803-943d-fdbc66598f5c 2957523 2 2020-10-14 23:36:12 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-14 23:36:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 23:36:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049d24e8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-14 23:36:16 +0000 UTC,LastTransitionTime:2020-10-14 23:36:16 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-10-14 23:36:16 +0000 UTC,LastTransitionTime:2020-10-14 23:36:12 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 14 23:36:17.104: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-1078 /apis/apps/v1/namespaces/deployment-1078/replicasets/test-recreate-deployment-f79dd4667 c8e9e99a-d229-49ae-bd83-db27a1ce2fe5 2957521 1 2020-10-14 23:36:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment e5561fdf-695d-4803-943d-fdbc66598f5c 0xc0049d2ae0 0xc0049d2ae1}] [] [{kube-controller-manager Update apps/v1 2020-10-14 23:36:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5561fdf-695d-4803-943d-fdbc66598f5c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049d2b68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:36:17.104: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 14 23:36:17.104: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-1078 /apis/apps/v1/namespaces/deployment-1078/replicasets/test-recreate-deployment-c96cf48f f80cdec0-2b92-4f07-8f2d-ae5669b6eb04 2957512 2 2020-10-14 23:36:12 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment e5561fdf-695d-4803-943d-fdbc66598f5c 0xc0049d29af 0xc0049d29e0}] [] [{kube-controller-manager Update apps/v1 2020-10-14 23:36:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e5561fdf-695d-4803-943d-fdbc66598f5c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSele
ctor{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0049d2a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:36:17.119: INFO: Pod "test-recreate-deployment-f79dd4667-nf78l" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-nf78l test-recreate-deployment-f79dd4667- deployment-1078 /api/v1/namespaces/deployment-1078/pods/test-recreate-deployment-f79dd4667-nf78l dfc36ea5-407c-4eb8-8e6f-cbe3c9845b5f 2957524 0 2020-10-14 23:36:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 c8e9e99a-d229-49ae-bd83-db27a1ce2fe5 0xc0049d3140 0xc0049d3141}] [] [{kube-controller-manager Update v1 2020-10-14 23:36:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c8e9e99a-d229-49ae-bd83-db27a1ce2fe5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:36:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mhcvj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mhcvj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{
},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mhcvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{
Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:36:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:36:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:36:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:36:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:36:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:17.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1078" for this suite. 
• [SLOW TEST:5.139 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":157,"skipped":2325,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:17.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3957 STEP: creating service affinity-nodeport in namespace services-3957 STEP: creating replication controller affinity-nodeport in namespace services-3957 I1014 23:36:17.560309 7 
runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-3957, replica count: 3 I1014 23:36:20.610588 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:36:23.610757 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:36:26.610990 7 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:36:26.622: INFO: Creating new exec pod Oct 14 23:36:31.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3957 execpod-affinityrhl5j -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Oct 14 23:36:31.923: INFO: stderr: "I1014 23:36:31.815517 1840 log.go:181] (0xc0004fb130) (0xc0004f28c0) Create stream\nI1014 23:36:31.815576 1840 log.go:181] (0xc0004fb130) (0xc0004f28c0) Stream added, broadcasting: 1\nI1014 23:36:31.817680 1840 log.go:181] (0xc0004fb130) Reply frame received for 1\nI1014 23:36:31.817751 1840 log.go:181] (0xc0004fb130) (0xc0004f2960) Create stream\nI1014 23:36:31.817763 1840 log.go:181] (0xc0004fb130) (0xc0004f2960) Stream added, broadcasting: 3\nI1014 23:36:31.818517 1840 log.go:181] (0xc0004fb130) Reply frame received for 3\nI1014 23:36:31.818550 1840 log.go:181] (0xc0004fb130) (0xc000f0a280) Create stream\nI1014 23:36:31.818566 1840 log.go:181] (0xc0004fb130) (0xc000f0a280) Stream added, broadcasting: 5\nI1014 23:36:31.819411 1840 log.go:181] (0xc0004fb130) Reply frame received for 5\nI1014 23:36:31.914826 1840 log.go:181] (0xc0004fb130) Data frame received for 5\nI1014 23:36:31.914859 1840 log.go:181] (0xc000f0a280) (5) Data frame handling\nI1014 23:36:31.914874 1840 log.go:181] (0xc000f0a280) (5) Data frame 
sent\nI1014 23:36:31.914882 1840 log.go:181] (0xc0004fb130) Data frame received for 5\nI1014 23:36:31.914890 1840 log.go:181] (0xc000f0a280) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1014 23:36:31.914910 1840 log.go:181] (0xc000f0a280) (5) Data frame sent\nI1014 23:36:31.915277 1840 log.go:181] (0xc0004fb130) Data frame received for 3\nI1014 23:36:31.915303 1840 log.go:181] (0xc0004f2960) (3) Data frame handling\nI1014 23:36:31.915548 1840 log.go:181] (0xc0004fb130) Data frame received for 5\nI1014 23:36:31.915587 1840 log.go:181] (0xc000f0a280) (5) Data frame handling\nI1014 23:36:31.917228 1840 log.go:181] (0xc0004fb130) Data frame received for 1\nI1014 23:36:31.917250 1840 log.go:181] (0xc0004f28c0) (1) Data frame handling\nI1014 23:36:31.917286 1840 log.go:181] (0xc0004f28c0) (1) Data frame sent\nI1014 23:36:31.917314 1840 log.go:181] (0xc0004fb130) (0xc0004f28c0) Stream removed, broadcasting: 1\nI1014 23:36:31.917457 1840 log.go:181] (0xc0004fb130) Go away received\nI1014 23:36:31.917738 1840 log.go:181] (0xc0004fb130) (0xc0004f28c0) Stream removed, broadcasting: 1\nI1014 23:36:31.917759 1840 log.go:181] (0xc0004fb130) (0xc0004f2960) Stream removed, broadcasting: 3\nI1014 23:36:31.917770 1840 log.go:181] (0xc0004fb130) (0xc000f0a280) Stream removed, broadcasting: 5\n" Oct 14 23:36:31.923: INFO: stdout: "" Oct 14 23:36:31.924: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3957 execpod-affinityrhl5j -- /bin/sh -x -c nc -zv -t -w 2 10.101.121.28 80' Oct 14 23:36:32.146: INFO: stderr: "I1014 23:36:32.066659 1858 log.go:181] (0xc00003a4d0) (0xc000cba1e0) Create stream\nI1014 23:36:32.066727 1858 log.go:181] (0xc00003a4d0) (0xc000cba1e0) Stream added, broadcasting: 1\nI1014 23:36:32.068745 1858 log.go:181] (0xc00003a4d0) Reply frame received for 1\nI1014 23:36:32.068783 1858 log.go:181] 
(0xc00003a4d0) (0xc000916500) Create stream\nI1014 23:36:32.068795 1858 log.go:181] (0xc00003a4d0) (0xc000916500) Stream added, broadcasting: 3\nI1014 23:36:32.069933 1858 log.go:181] (0xc00003a4d0) Reply frame received for 3\nI1014 23:36:32.069984 1858 log.go:181] (0xc00003a4d0) (0xc000642000) Create stream\nI1014 23:36:32.069999 1858 log.go:181] (0xc00003a4d0) (0xc000642000) Stream added, broadcasting: 5\nI1014 23:36:32.070873 1858 log.go:181] (0xc00003a4d0) Reply frame received for 5\nI1014 23:36:32.137860 1858 log.go:181] (0xc00003a4d0) Data frame received for 5\nI1014 23:36:32.137920 1858 log.go:181] (0xc000642000) (5) Data frame handling\nI1014 23:36:32.137946 1858 log.go:181] (0xc000642000) (5) Data frame sent\nI1014 23:36:32.137965 1858 log.go:181] (0xc00003a4d0) Data frame received for 5\nI1014 23:36:32.137982 1858 log.go:181] (0xc000642000) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.121.28 80\nConnection to 10.101.121.28 80 port [tcp/http] succeeded!\nI1014 23:36:32.138038 1858 log.go:181] (0xc00003a4d0) Data frame received for 3\nI1014 23:36:32.138061 1858 log.go:181] (0xc000916500) (3) Data frame handling\nI1014 23:36:32.139444 1858 log.go:181] (0xc00003a4d0) Data frame received for 1\nI1014 23:36:32.139472 1858 log.go:181] (0xc000cba1e0) (1) Data frame handling\nI1014 23:36:32.139491 1858 log.go:181] (0xc000cba1e0) (1) Data frame sent\nI1014 23:36:32.139514 1858 log.go:181] (0xc00003a4d0) (0xc000cba1e0) Stream removed, broadcasting: 1\nI1014 23:36:32.139984 1858 log.go:181] (0xc00003a4d0) (0xc000cba1e0) Stream removed, broadcasting: 1\nI1014 23:36:32.140012 1858 log.go:181] (0xc00003a4d0) (0xc000916500) Stream removed, broadcasting: 3\nI1014 23:36:32.140238 1858 log.go:181] (0xc00003a4d0) (0xc000642000) Stream removed, broadcasting: 5\n" Oct 14 23:36:32.147: INFO: stdout: "" Oct 14 23:36:32.147: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3957 
execpod-affinityrhl5j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 31541' Oct 14 23:36:32.356: INFO: stderr: "I1014 23:36:32.287486 1877 log.go:181] (0xc000929130) (0xc00099e780) Create stream\nI1014 23:36:32.287548 1877 log.go:181] (0xc000929130) (0xc00099e780) Stream added, broadcasting: 1\nI1014 23:36:32.293700 1877 log.go:181] (0xc000929130) Reply frame received for 1\nI1014 23:36:32.293755 1877 log.go:181] (0xc000929130) (0xc000b8a6e0) Create stream\nI1014 23:36:32.293770 1877 log.go:181] (0xc000929130) (0xc000b8a6e0) Stream added, broadcasting: 3\nI1014 23:36:32.294801 1877 log.go:181] (0xc000929130) Reply frame received for 3\nI1014 23:36:32.294848 1877 log.go:181] (0xc000929130) (0xc00099e000) Create stream\nI1014 23:36:32.294868 1877 log.go:181] (0xc000929130) (0xc00099e000) Stream added, broadcasting: 5\nI1014 23:36:32.295664 1877 log.go:181] (0xc000929130) Reply frame received for 5\nI1014 23:36:32.350482 1877 log.go:181] (0xc000929130) Data frame received for 5\nI1014 23:36:32.350518 1877 log.go:181] (0xc00099e000) (5) Data frame handling\nI1014 23:36:32.350549 1877 log.go:181] (0xc00099e000) (5) Data frame sent\nI1014 23:36:32.350577 1877 log.go:181] (0xc000929130) Data frame received for 5\nI1014 23:36:32.350592 1877 log.go:181] (0xc00099e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.18 31541\nConnection to 172.18.0.18 31541 port [tcp/31541] succeeded!\nI1014 23:36:32.350734 1877 log.go:181] (0xc000929130) Data frame received for 3\nI1014 23:36:32.350750 1877 log.go:181] (0xc000b8a6e0) (3) Data frame handling\nI1014 23:36:32.352577 1877 log.go:181] (0xc000929130) Data frame received for 1\nI1014 23:36:32.352596 1877 log.go:181] (0xc00099e780) (1) Data frame handling\nI1014 23:36:32.352607 1877 log.go:181] (0xc00099e780) (1) Data frame sent\nI1014 23:36:32.352622 1877 log.go:181] (0xc000929130) (0xc00099e780) Stream removed, broadcasting: 1\nI1014 23:36:32.352741 1877 log.go:181] (0xc000929130) Go away received\nI1014 23:36:32.352986 1877 
log.go:181] (0xc000929130) (0xc00099e780) Stream removed, broadcasting: 1\nI1014 23:36:32.353002 1877 log.go:181] (0xc000929130) (0xc000b8a6e0) Stream removed, broadcasting: 3\nI1014 23:36:32.353008 1877 log.go:181] (0xc000929130) (0xc00099e000) Stream removed, broadcasting: 5\n" Oct 14 23:36:32.357: INFO: stdout: "" Oct 14 23:36:32.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3957 execpod-affinityrhl5j -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 31541' Oct 14 23:36:32.577: INFO: stderr: "I1014 23:36:32.485320 1896 log.go:181] (0xc00018dc30) (0xc00013eb40) Create stream\nI1014 23:36:32.485385 1896 log.go:181] (0xc00018dc30) (0xc00013eb40) Stream added, broadcasting: 1\nI1014 23:36:32.488326 1896 log.go:181] (0xc00018dc30) Reply frame received for 1\nI1014 23:36:32.488369 1896 log.go:181] (0xc00018dc30) (0xc00013ebe0) Create stream\nI1014 23:36:32.488385 1896 log.go:181] (0xc00018dc30) (0xc00013ebe0) Stream added, broadcasting: 3\nI1014 23:36:32.489630 1896 log.go:181] (0xc00018dc30) Reply frame received for 3\nI1014 23:36:32.489671 1896 log.go:181] (0xc00018dc30) (0xc000dd8000) Create stream\nI1014 23:36:32.489693 1896 log.go:181] (0xc00018dc30) (0xc000dd8000) Stream added, broadcasting: 5\nI1014 23:36:32.490605 1896 log.go:181] (0xc00018dc30) Reply frame received for 5\nI1014 23:36:32.569731 1896 log.go:181] (0xc00018dc30) Data frame received for 5\nI1014 23:36:32.569758 1896 log.go:181] (0xc000dd8000) (5) Data frame handling\nI1014 23:36:32.569782 1896 log.go:181] (0xc000dd8000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.17 31541\nI1014 23:36:32.570210 1896 log.go:181] (0xc00018dc30) Data frame received for 5\nI1014 23:36:32.570245 1896 log.go:181] (0xc000dd8000) (5) Data frame handling\nI1014 23:36:32.570262 1896 log.go:181] (0xc000dd8000) (5) Data frame sent\nConnection to 172.18.0.17 31541 port [tcp/31541] succeeded!\nI1014 23:36:32.570605 1896 log.go:181] 
(0xc00018dc30) Data frame received for 5\nI1014 23:36:32.570638 1896 log.go:181] (0xc000dd8000) (5) Data frame handling\nI1014 23:36:32.570680 1896 log.go:181] (0xc00018dc30) Data frame received for 3\nI1014 23:36:32.570705 1896 log.go:181] (0xc00013ebe0) (3) Data frame handling\nI1014 23:36:32.571824 1896 log.go:181] (0xc00018dc30) Data frame received for 1\nI1014 23:36:32.571850 1896 log.go:181] (0xc00013eb40) (1) Data frame handling\nI1014 23:36:32.571869 1896 log.go:181] (0xc00013eb40) (1) Data frame sent\nI1014 23:36:32.571886 1896 log.go:181] (0xc00018dc30) (0xc00013eb40) Stream removed, broadcasting: 1\nI1014 23:36:32.571906 1896 log.go:181] (0xc00018dc30) Go away received\nI1014 23:36:32.572212 1896 log.go:181] (0xc00018dc30) (0xc00013eb40) Stream removed, broadcasting: 1\nI1014 23:36:32.572229 1896 log.go:181] (0xc00018dc30) (0xc00013ebe0) Stream removed, broadcasting: 3\nI1014 23:36:32.572236 1896 log.go:181] (0xc00018dc30) (0xc000dd8000) Stream removed, broadcasting: 5\n" Oct 14 23:36:32.577: INFO: stdout: "" Oct 14 23:36:32.577: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3957 execpod-affinityrhl5j -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:31541/ ; done' Oct 14 23:36:32.878: INFO: stderr: "I1014 23:36:32.727851 1914 log.go:181] (0xc0005e6dc0) (0xc0006dc500) Create stream\nI1014 23:36:32.727910 1914 log.go:181] (0xc0005e6dc0) (0xc0006dc500) Stream added, broadcasting: 1\nI1014 23:36:32.729513 1914 log.go:181] (0xc0005e6dc0) Reply frame received for 1\nI1014 23:36:32.729555 1914 log.go:181] (0xc0005e6dc0) (0xc000a82280) Create stream\nI1014 23:36:32.729564 1914 log.go:181] (0xc0005e6dc0) (0xc000a82280) Stream added, broadcasting: 3\nI1014 23:36:32.730353 1914 log.go:181] (0xc0005e6dc0) Reply frame received for 3\nI1014 23:36:32.730389 1914 log.go:181] (0xc0005e6dc0) (0xc0009981e0) Create stream\nI1014 
23:36:32.730397 1914 log.go:181] (0xc0005e6dc0) (0xc0009981e0) Stream added, broadcasting: 5\nI1014 23:36:32.731091 1914 log.go:181] (0xc0005e6dc0) Reply frame received for 5\nI1014 23:36:32.779213 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.779250 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.779262 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.779282 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.779312 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.779327 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.784085 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.784115 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.784136 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.784484 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.784514 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.784531 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.784552 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.784561 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.784568 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.789750 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.789776 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.789802 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.790326 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.790362 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.790385 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.790421 1914 
log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.790443 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.790461 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.795014 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.795037 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.795057 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.795547 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.795577 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.795608 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\nI1014 23:36:32.795620 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\n+ echo\n+ curl -q -s --connect-timeout 2I1014 23:36:32.795628 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.795651 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n http://172.18.0.18:31541/\nI1014 23:36:32.795693 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.795731 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.795748 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.801411 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.801432 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.801464 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.801935 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.801952 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.801961 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.801973 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.801982 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.801988 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.808356 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.808383 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.808401 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.809033 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.809052 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.809070 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.809261 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.809282 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.809299 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.813453 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.813483 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.813500 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.814190 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.814210 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.814224 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.814242 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.814254 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.814272 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.820583 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.820603 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.820620 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.821457 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.821471 1914 log.go:181] (0xc0009981e0) (5) Data frame 
handling\nI1014 23:36:32.821483 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\nI1014 23:36:32.821499 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\n+ echo\nI1014 23:36:32.821516 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.821536 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.821563 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.821603 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.821620 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.826963 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.826992 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.827016 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.827775 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.827799 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.827810 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.827845 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.827865 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.827884 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.831691 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.831720 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.831730 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.832503 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.832524 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.832536 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.832577 1914 log.go:181] (0xc0005e6dc0) Data frame 
received for 3\nI1014 23:36:32.832596 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.832614 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.839344 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.839361 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.839383 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.839946 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.839965 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.839986 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.840019 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.840038 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.840062 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\nI1014 23:36:32.845173 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.845211 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.845235 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.845626 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.845664 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.845685 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.845711 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.845728 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.845755 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.848755 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.848774 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.848806 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.849330 1914 log.go:181] 
(0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.849349 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.849358 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.849403 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.849428 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.849445 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.853238 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.853252 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.853264 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.853857 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.853887 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.853924 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.853949 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.853960 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.853969 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.857527 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.857549 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.857569 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.857959 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.857973 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.857981 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.858005 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.858023 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.858042 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.863564 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.863595 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.863615 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.864196 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.864223 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.864238 1914 log.go:181] (0xc0009981e0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:31541/\nI1014 23:36:32.864257 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.864264 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.864272 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.868408 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.868429 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.868453 1914 log.go:181] (0xc000a82280) (3) Data frame sent\nI1014 23:36:32.869171 1914 log.go:181] (0xc0005e6dc0) Data frame received for 3\nI1014 23:36:32.869216 1914 log.go:181] (0xc000a82280) (3) Data frame handling\nI1014 23:36:32.869561 1914 log.go:181] (0xc0005e6dc0) Data frame received for 5\nI1014 23:36:32.869576 1914 log.go:181] (0xc0009981e0) (5) Data frame handling\nI1014 23:36:32.870942 1914 log.go:181] (0xc0005e6dc0) Data frame received for 1\nI1014 23:36:32.870967 1914 log.go:181] (0xc0006dc500) (1) Data frame handling\nI1014 23:36:32.870982 1914 log.go:181] (0xc0006dc500) (1) Data frame sent\nI1014 23:36:32.870999 1914 log.go:181] (0xc0005e6dc0) (0xc0006dc500) Stream removed, broadcasting: 1\nI1014 23:36:32.871018 1914 log.go:181] (0xc0005e6dc0) Go away received\nI1014 23:36:32.871508 1914 log.go:181] (0xc0005e6dc0) (0xc0006dc500) Stream removed, broadcasting: 1\nI1014 23:36:32.871534 1914 log.go:181] (0xc0005e6dc0) (0xc000a82280) Stream removed, broadcasting: 3\nI1014 
23:36:32.871547 1914 log.go:181] (0xc0005e6dc0) (0xc0009981e0) Stream removed, broadcasting: 5\n" Oct 14 23:36:32.879: INFO: stdout: "\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t\naffinity-nodeport-2cz2t" Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Received response from host: affinity-nodeport-2cz2t Oct 14 23:36:32.879: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-3957, will wait for the garbage collector to delete 
the pods Oct 14 23:36:32.960: INFO: Deleting ReplicationController affinity-nodeport took: 6.467693ms Oct 14 23:36:33.060: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.296388ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:40.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3957" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:23.283 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":158,"skipped":2359,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:40.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting 
for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-1b836b10-ba0d-475f-b4da-314d4be3683d STEP: Creating a pod to test consume secrets Oct 14 23:36:40.508: INFO: Waiting up to 5m0s for pod "pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf" in namespace "secrets-9421" to be "Succeeded or Failed" Oct 14 23:36:40.530: INFO: Pod "pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.160118ms Oct 14 23:36:42.535: INFO: Pod "pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02652185s Oct 14 23:36:44.540: INFO: Pod "pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031272892s STEP: Saw pod success Oct 14 23:36:44.540: INFO: Pod "pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf" satisfied condition "Succeeded or Failed" Oct 14 23:36:44.543: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf container secret-volume-test: STEP: delete the pod Oct 14 23:36:44.575: INFO: Waiting for pod pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf to disappear Oct 14 23:36:44.584: INFO: Pod pod-secrets-f5a1f4fb-d1b9-4ccc-a0fe-33538dc708bf no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:44.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9421" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":159,"skipped":2374,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:44.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Oct 14 23:36:49.747: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:36:49.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1877" for this suite. 
• [SLOW TEST:5.384 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":160,"skipped":2376,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:36:49.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1096 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating 
service externalsvc in namespace services-1096 STEP: creating replication controller externalsvc in namespace services-1096 I1014 23:36:50.181022 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1096, replica count: 2 I1014 23:36:53.231389 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:36:56.231614 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:36:59.231889 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 14 23:36:59.311: INFO: Creating new exec pod Oct 14 23:37:03.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-1096 execpod7l59g -- /bin/sh -x -c nslookup nodeport-service.services-1096.svc.cluster.local' Oct 14 23:37:03.609: INFO: stderr: "I1014 23:37:03.483536 1932 log.go:181] (0xc000f36f20) (0xc000826640) Create stream\nI1014 23:37:03.483587 1932 log.go:181] (0xc000f36f20) (0xc000826640) Stream added, broadcasting: 1\nI1014 23:37:03.490284 1932 log.go:181] (0xc000f36f20) Reply frame received for 1\nI1014 23:37:03.490332 1932 log.go:181] (0xc000f36f20) (0xc000bac000) Create stream\nI1014 23:37:03.490349 1932 log.go:181] (0xc000f36f20) (0xc000bac000) Stream added, broadcasting: 3\nI1014 23:37:03.491237 1932 log.go:181] (0xc000f36f20) Reply frame received for 3\nI1014 23:37:03.491268 1932 log.go:181] (0xc000f36f20) (0xc0001a10e0) Create stream\nI1014 23:37:03.491284 1932 log.go:181] (0xc000f36f20) (0xc0001a10e0) Stream added, broadcasting: 5\nI1014 23:37:03.492103 1932 log.go:181] (0xc000f36f20) Reply frame received for 5\nI1014 23:37:03.590724 1932 log.go:181] 
(0xc000f36f20) Data frame received for 5\nI1014 23:37:03.590752 1932 log.go:181] (0xc0001a10e0) (5) Data frame handling\nI1014 23:37:03.590774 1932 log.go:181] (0xc0001a10e0) (5) Data frame sent\n+ nslookup nodeport-service.services-1096.svc.cluster.local\nI1014 23:37:03.599880 1932 log.go:181] (0xc000f36f20) Data frame received for 3\nI1014 23:37:03.599917 1932 log.go:181] (0xc000bac000) (3) Data frame handling\nI1014 23:37:03.599937 1932 log.go:181] (0xc000bac000) (3) Data frame sent\nI1014 23:37:03.600684 1932 log.go:181] (0xc000f36f20) Data frame received for 3\nI1014 23:37:03.600710 1932 log.go:181] (0xc000bac000) (3) Data frame handling\nI1014 23:37:03.600729 1932 log.go:181] (0xc000bac000) (3) Data frame sent\nI1014 23:37:03.601283 1932 log.go:181] (0xc000f36f20) Data frame received for 5\nI1014 23:37:03.601315 1932 log.go:181] (0xc0001a10e0) (5) Data frame handling\nI1014 23:37:03.601370 1932 log.go:181] (0xc000f36f20) Data frame received for 3\nI1014 23:37:03.601393 1932 log.go:181] (0xc000bac000) (3) Data frame handling\nI1014 23:37:03.602755 1932 log.go:181] (0xc000f36f20) Data frame received for 1\nI1014 23:37:03.602769 1932 log.go:181] (0xc000826640) (1) Data frame handling\nI1014 23:37:03.602778 1932 log.go:181] (0xc000826640) (1) Data frame sent\nI1014 23:37:03.603034 1932 log.go:181] (0xc000f36f20) (0xc000826640) Stream removed, broadcasting: 1\nI1014 23:37:03.603058 1932 log.go:181] (0xc000f36f20) Go away received\nI1014 23:37:03.603573 1932 log.go:181] (0xc000f36f20) (0xc000826640) Stream removed, broadcasting: 1\nI1014 23:37:03.603594 1932 log.go:181] (0xc000f36f20) (0xc000bac000) Stream removed, broadcasting: 3\nI1014 23:37:03.603605 1932 log.go:181] (0xc000f36f20) (0xc0001a10e0) Stream removed, broadcasting: 5\n" Oct 14 23:37:03.609: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1096.svc.cluster.local\tcanonical name = 
externalsvc.services-1096.svc.cluster.local.\nName:\texternalsvc.services-1096.svc.cluster.local\nAddress: 10.109.182.111\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1096, will wait for the garbage collector to delete the pods Oct 14 23:37:03.669: INFO: Deleting ReplicationController externalsvc took: 6.572061ms Oct 14 23:37:03.769: INFO: Terminating ReplicationController externalsvc pods took: 100.186957ms Oct 14 23:37:10.396: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:37:10.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1096" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.497 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":161,"skipped":2392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:37:10.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:37:10.531: INFO: Creating deployment "webserver-deployment" Oct 14 23:37:10.538: INFO: Waiting for observed generation 1 Oct 14 23:37:12.547: INFO: Waiting for all required pods to come up Oct 14 23:37:12.552: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 14 23:37:24.561: INFO: Waiting for deployment "webserver-deployment" to complete Oct 14 23:37:24.566: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 14 23:37:24.574: INFO: Updating deployment webserver-deployment Oct 14 23:37:24.574: INFO: Waiting for observed generation 2 Oct 14 23:37:26.588: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 14 23:37:26.590: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 14 23:37:26.593: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 14 23:37:26.600: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 14 23:37:26.600: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 14 23:37:26.603: INFO: Waiting for the second 
rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 14 23:37:26.607: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Oct 14 23:37:26.607: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 14 23:37:26.615: INFO: Updating deployment webserver-deployment Oct 14 23:37:26.615: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 14 23:37:26.746: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 14 23:37:26.798: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 23:37:26.986: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6271 /apis/apps/v1/namespaces/deployment-6271/deployments/webserver-deployment b854ca49-b479-49df-a7d3-ac32aeed9d9c 2958226 3 2020-10-14 23:37:10 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 
23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0046c79b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-10-14 
23:37:25 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-14 23:37:26 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 14 23:37:27.143: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6271 /apis/apps/v1/namespaces/deployment-6271/replicasets/webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 2958280 3 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment b854ca49-b479-49df-a7d3-ac32aeed9d9c 0xc0048244f7 0xc0048244f8}] [] [{kube-controller-manager Update apps/v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b854ca49-b479-49df-a7d3-ac32aeed9d9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004824578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:37:27.143: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 14 23:37:27.143: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6271 /apis/apps/v1/namespaces/deployment-6271/replicasets/webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 2958270 3 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment b854ca49-b479-49df-a7d3-ac32aeed9d9c 0xc0048245d7 0xc0048245d8}] [] [{kube-controller-manager Update apps/v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b854ca49-b479-49df-a7d3-ac32aeed9d9c\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selecto
r:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004824648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:37:27.254: INFO: Pod "webserver-deployment-795d758f88-5jbbf" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-5jbbf webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-5jbbf 0a2c7d98-8826-4950-b388-a07d06b848bc 2958295 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004824b47 0xc004824b48}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-10-14 23:37:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.254: INFO: Pod "webserver-deployment-795d758f88-cswnr" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-cswnr webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-cswnr 1ada416a-8eff-461f-a99f-9b6a8b428fd9 2958208 0 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004824cf7 0xc004824cf8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-10-14 23:37:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.254: INFO: Pod "webserver-deployment-795d758f88-fb8f7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fb8f7 webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-fb8f7 5e78f9f3-25cf-47d0-a850-fda9372d0bb7 2958261 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004824ea7 0xc004824ea8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-fpxct" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-fpxct webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-fpxct 7806dc50-b679-424b-94f7-e490dd282040 2958187 0 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004824ff7 0xc004824ff8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-10-14 23:37:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-n4p9n" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-n4p9n webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-n4p9n 93a5d427-f532-432a-92fe-bc3e1ba088c2 2958207 0 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc0048251e7 0xc0048251e8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:37:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-p56cc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-p56cc webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-p56cc e7f1c41b-9726-42d8-b92c-4963a7118a4b 2958281 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc0048253a7 0xc0048253a8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-pqbll" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pqbll webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-pqbll 81db3e62-c240-48e6-8448-957dd48d4fcf 2958260 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825517 0xc004825518}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-r762l" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-r762l webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-r762l d271cf7e-26da-49f0-b13a-e6673be6a7e7 2958245 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825657 0xc004825658}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.255: INFO: Pod "webserver-deployment-795d758f88-s9h88" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-s9h88 webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-s9h88 6b8f0455-a8cb-4492-ab46-52c507f1c6d9 2958178 0 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825797 0xc004825798}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:37:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-795d758f88-ssxk9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ssxk9 webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-ssxk9 b347e0b2-cff4-4c1f-8dd0-aee54bb34c04 2958188 0 2020-10-14 23:37:24 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825947 0xc004825948}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions
:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:37:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-795d758f88-tswc9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-tswc9 webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-tswc9 89c4384b-1279-468c-8cca-65f9ed49f727 2958259 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825af7 0xc004825af8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-795d758f88-wsm5b" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wsm5b webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-wsm5b e629062b-2609-4662-8a86-4be63755598a 2958298 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825c37 0xc004825c38}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:37:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-795d758f88-zn6rg" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zn6rg webserver-deployment-795d758f88- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-795d758f88-zn6rg 8b67ee5b-3d95-4bdd-bc43-6497737af72b 2958276 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 879a94a4-4891-4996-b156-ad4f61abade5 0xc004825df7 0xc004825df8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"879a94a4-4891-4996-b156-ad4f61abade5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-dd94f59b7-2jx8d" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-2jx8d webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-2jx8d 6156687d-0801-4029-a297-742f1257fd71 2958249 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc004825f37 0xc004825f38}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-dd94f59b7-46l2v" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-46l2v webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-46l2v ac0bb0bb-8e09-43f5-87ba-6db58a1866b8 2958247 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4377 0xc0033c4378}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS
:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists
,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.256: INFO: Pod "webserver-deployment-dd94f59b7-476xz" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-476xz webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-476xz 0dcfc919-a6a5-4897-91c8-74af9c678e86 2958253 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4667 0xc0033c4668}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-47s7w" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-47s7w webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-47s7w b278c805-d9bf-4fb0-b857-b5ca7dee9a03 2958277 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4867 0xc0033c4868}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{P
odCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-4kb8g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-4kb8g webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-4kb8g 999123b4-7a25-4003-9226-7a8a3b540a92 2958232 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4a17 0xc0033c4a18}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS
:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-6vjk2" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6vjk2 webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-6vjk2 4bf26270-ea9a-4d97-8342-d6347a610225 2958075 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4b47 0xc0033c4b48}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.230\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.230,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://07db1197846151c3094a0b1d0210211bc1f6cdb591ec0a1d94c8cbd902aaf6fe,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.230,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-7jgqf" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-7jgqf webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-7jgqf 12748cc1-af04-467c-bd6c-64a343d0e368 2958124 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4cf7 
0xc0033c4cf8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.46\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38
-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologyS
preadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.46,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://9abe4c13a26dbecc62654ba792f3c4f435cafcef3a3efd96b6084df0fdd258ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-b45t8" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-b45t8 webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-b45t8 0310a76e-765c-43eb-949c-da91403406ec 2958272 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] 
[{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4ea7 0xc0033c4ea8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContex
t:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-bc7g9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-bc7g9 webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-bc7g9 79ef0e28-f089-4438-9b73-5a95b30c9366 2958097 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c4fd7 0xc0033c4fd8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.45,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0c96cd302640d2a68455e47736ce332e36101ea536ca931a42945e9b170df6d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.257: INFO: Pod "webserver-deployment-dd94f59b7-cnmqk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-cnmqk webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-cnmqk 6b356513-2054-4b0b-9516-432e6c55cc0a 2958068 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5187 0xc0033c5188}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.43,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://89058d669df9e597493ddcf20c8239d48ea4d684a53fd84d461b5decba817a13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod "webserver-deployment-dd94f59b7-gp989" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-gp989 webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-gp989 77e053bf-3a31-4025-aee3-2016c1a46476 2958258 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 
0xc0033c5347 0xc0033c5348}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args
:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Topo
logySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:,StartTime:2020-10-14 23:37:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod "webserver-deployment-dd94f59b7-jrnvj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jrnvj webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-jrnvj ce7fac1b-d844-45e5-92d9-21132652981f 2958273 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c54d7 0xc0033c54d8}] [] 
[{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesys
tem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod 
"webserver-deployment-dd94f59b7-lggc5" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lggc5 webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-lggc5 780fdd43-752e-4e56-8655-15e0edf98962 2958263 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5607 0xc0033c5608}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:,StartTime:2020-10-14 23:37:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod "webserver-deployment-dd94f59b7-lndkr" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lndkr webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-lndkr d21093b4-8a4c-498e-8dc1-d0373d947e51 2958110 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c57a7 0xc0033c57a8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.232\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.232,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://71e89cac27c6ea32465c3467a2f829adebb633394d7d090a749710dfc03e4c94,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.232,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod "webserver-deployment-dd94f59b7-lvqpd" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-lvqpd webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-lvqpd 6e83daa6-2e12-48f1-9305-3dbe2860724c 2958088 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5987 0xc0033c5988}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.231\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Reso
urceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetH
ostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.231,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://51a81bd30bbfdd21439c2b116ecda27c3a16bdb11c2327f4cfbbcf2af2c8462d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.231,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.258: INFO: Pod "webserver-deployment-dd94f59b7-mgzhn" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-mgzhn webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-mgzhn 4f3c386b-a04e-4fed-86be-eeee8c92b0c4 2958125 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5b57 
0xc0033c5b58}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.233\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.3
8-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologyS
preadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.18,PodIP:10.244.2.233,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://953f861f72db108243d1d19ce75158aa227a06d8d15d2f996f73a35fc5a8f681,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.233,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.259: INFO: Pod "webserver-deployment-dd94f59b7-qbccg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qbccg webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-qbccg 5295f93e-e656-4177-bebd-ceb281555e01 2958113 0 2020-10-14 23:37:10 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 
ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5d07 0xc0033c5d08}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:37:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemera
l:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,Enabl
eServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.44,StartTime:2020-10-14 23:37:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:37:20 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bad074e12e5b7b3c956ae447928003423511c683dd001b25715a792d7a9997ab,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.259: INFO: Pod "webserver-deployment-dd94f59b7-qfb8g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-qfb8g webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-qfb8g cab955c6-08ed-4c44-a4fc-21d1ddce0980 
2958275 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5eb7 0xc0033c5eb8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:n
il,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.259: INFO: Pod "webserver-deployment-dd94f59b7-rz5nt" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rz5nt webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-rz5nt e99e097e-65fa-45f9-aa99-0742ec916ae9 2958274 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0033c5fe7 0xc0033c5fe8}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDis
k:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,SharePro
cessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 14 23:37:27.259: INFO: Pod "webserver-deployment-dd94f59b7-z7dkj" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-z7dkj webserver-deployment-dd94f59b7- deployment-6271 /api/v1/namespaces/deployment-6271/pods/webserver-deployment-dd94f59b7-z7dkj 6be2be08-1a36-492c-a7f7-96f9ed82685d 2958254 0 2020-10-14 23:37:26 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 37843d57-bfd2-4625-8169-ed0f734a8648 0xc0034c0337 0xc0034c0338}] [] [{kube-controller-manager Update v1 2020-10-14 23:37:26 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37843d57-bfd2-4625-8169-ed0f734a8648\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-r5r7h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-r5r7h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-r5r7h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:37:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:37:27.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6271" for this suite. • [SLOW TEST:16.952 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":162,"skipped":2419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:37:27.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection 
with secret that has name projected-secret-test-449da2b6-f9c5-40ec-9218-26fe950b85a9 STEP: Creating a pod to test consume secrets Oct 14 23:37:27.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440" in namespace "projected-2473" to be "Succeeded or Failed" Oct 14 23:37:27.744: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 26.630019ms Oct 14 23:37:30.118: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400924348s Oct 14 23:37:32.302: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 4.584907229s Oct 14 23:37:34.753: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 7.036135335s Oct 14 23:37:37.231: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 9.5146102s Oct 14 23:37:39.356: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 11.639569727s Oct 14 23:37:41.631: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 13.9143553s Oct 14 23:37:43.956: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Pending", Reason="", readiness=false. Elapsed: 16.238704234s Oct 14 23:37:46.117: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Running", Reason="", readiness=true. Elapsed: 18.400501129s Oct 14 23:37:48.165: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.447927099s Oct 14 23:37:50.272: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.555232446s STEP: Saw pod success Oct 14 23:37:50.272: INFO: Pod "pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440" satisfied condition "Succeeded or Failed" Oct 14 23:37:50.275: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440 container projected-secret-volume-test: STEP: delete the pod Oct 14 23:37:50.887: INFO: Waiting for pod pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440 to disappear Oct 14 23:37:51.093: INFO: Pod pod-projected-secrets-c28b6516-0443-4497-a913-a0f1fe183440 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:37:51.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2473" for this suite. 
• [SLOW TEST:23.673 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2459,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:37:51.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pod templates Oct 14 23:37:51.303: INFO: created test-podtemplate-1 Oct 14 23:37:51.499: INFO: created test-podtemplate-2 Oct 14 23:37:51.638: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of 
pod templates Oct 14 23:37:51.655: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 14 23:37:52.040: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:37:52.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-8058" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":164,"skipped":2509,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:37:52.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 23:37:52.575: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 23:37:52.609: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 23:37:52.619: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Oct 14 23:37:52.625: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:37:52.625: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:37:52.625: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:37:52.625: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 23:37:52.625: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Oct 14 23:37:52.628: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:37:52.628: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:37:52.628: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Oct 14 23:37:52.628: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.163e002425e614bc], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.163e002429643361], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
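The FailedScheduling events above come from a pod whose nodeSelector matches no label on any node. A minimal sketch of such a pod spec (the label key/value and image are illustrative, not taken from the test source):

```yaml
# Hedged sketch: a pod with a nonempty nodeSelector that no node satisfies.
# The scheduler then emits: "0/3 nodes are available: 3 node(s) didn't match node selector."
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod        # matches the pod name in the events above
spec:
  nodeSelector:
    example-label: no-node-has-this   # hypothetical label; present on no node
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2       # assumed placeholder image
```

The test passes precisely because the pod stays Pending and the two FailedScheduling warnings are observed.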
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:37:53.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3298" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":165,"skipped":2510,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:37:53.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4412.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4412.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > 
/results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4412.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4412.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4412.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4412.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 14 23:38:01.844: INFO: DNS probes using dns-4412/dns-test-7bd67bac-df17-4a9c-a61e-26ebe96947cc succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:38:02.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4412" for this suite. 
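The probe scripts above build each pod's DNS A-record name by dash-joining the pod IP with awk (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4412.pod.cluster.local"}'`). The same transformation in Python, as a sketch (the helper name `pod_a_record` is ours, not part of the test framework):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Build the cluster-DNS A record name for a pod IP, mirroring the
    awk pipeline in the probe script: dots in the IPv4 address become
    dashes, followed by <namespace>.pod.cluster.local."""
    return pod_ip.replace(".", "-") + f".{namespace}.pod.cluster.local"

# Example using a pod IP that appears later in this log (10.244.1.64):
print(pod_a_record("10.244.1.64", "dns-4412"))
# → 10-244-1-64.dns-4412.pod.cluster.local
```

The probers then resolve this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file for each successful lookup, which is what the test collects.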
• [SLOW TEST:8.836 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":166,"skipped":2523,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:38:02.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:38:02.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-8358" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":167,"skipped":2529,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:38:02.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1014 23:38:03.841012 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 23:39:06.019: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:39:06.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3187" for this suite. • [SLOW TEST:63.434 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":168,"skipped":2532,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:39:06.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e83972bb-d05c-4756-83ec-f8c3f0cb0fe2 STEP: Creating a pod to test consume configMaps Oct 14 23:39:06.152: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429" in namespace "projected-4345" to be "Succeeded or Failed" Oct 14 23:39:06.155: INFO: Pod "pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429": Phase="Pending", Reason="", readiness=false. Elapsed: 3.267902ms Oct 14 23:39:08.159: INFO: Pod "pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007379677s Oct 14 23:39:10.163: INFO: Pod "pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011321961s STEP: Saw pod success Oct 14 23:39:10.163: INFO: Pod "pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429" satisfied condition "Succeeded or Failed" Oct 14 23:39:10.165: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429 container projected-configmap-volume-test: STEP: delete the pod Oct 14 23:39:10.300: INFO: Waiting for pod pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429 to disappear Oct 14 23:39:10.348: INFO: Pod pod-projected-configmaps-4957597a-6161-4b26-a034-8476911b4429 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:39:10.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4345" for this suite. 
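The "volume with mappings" variant exercised above projects a ConfigMap key into the volume under a remapped path via `items`. A minimal sketch of the shape involved (names, key, and path are illustrative; the real test generates random suffixes like the ones logged above):

```yaml
# Hedged sketch of a projected ConfigMap volume with a key→path mapping.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                          # assumed placeholder image
    command: ["cat", "/etc/projected/remapped/data"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: example-configmap           # hypothetical ConfigMap name
          items:
          - key: data-1                     # original key in the ConfigMap
            path: remapped/data             # file path it is exposed under
```

The test succeeds when the container reads the expected value through the remapped path and exits 0, giving the `Phase="Succeeded"` transitions logged above.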
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":169,"skipped":2580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:39:10.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Oct 14 23:39:14.568: INFO: &Pod{ObjectMeta:{send-events-54c27e46-330b-478f-bc99-6b11d4dce6cc events-3703 /api/v1/namespaces/events-3703/pods/send-events-54c27e46-330b-478f-bc99-6b11d4dce6cc f88781df-f11d-416f-b31b-e2564c355e5b 2959003 0 2020-10-14 23:39:10 +0000 UTC map[name:foo time:543363654] map[] [] [] [{e2e.test Update v1 2020-10-14 23:39:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:39:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7hwx9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7hwx9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requ
ests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7hwx9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:39:10 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:39:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:39:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:39:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.64,StartTime:2020-10-14 23:39:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:39:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://d6f518fdf47f022aac00189d08f36ce45ce53dde94ea61ae68b294cbf0e91599,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Oct 14 23:39:16.575: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Oct 14 23:39:18.581: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:39:18.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3703" for this suite. 
• [SLOW TEST:8.290 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":170,"skipped":2613,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:39:18.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated 
STEP: the termination message should be set Oct 14 23:39:22.844: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:39:22.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5811" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2635,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:39:22.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating pod test-webserver-6fa1269e-a964-439f-b00e-393d2ec027a3 in namespace container-probe-9535 Oct 14 23:39:26.960: INFO: Started pod test-webserver-6fa1269e-a964-439f-b00e-393d2ec027a3 in namespace container-probe-9535 STEP: checking the pod's current state and verifying that restartCount is present Oct 14 23:39:26.963: INFO: Initial restart count of pod test-webserver-6fa1269e-a964-439f-b00e-393d2ec027a3 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:43:27.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9535" for this suite. • [SLOW TEST:244.770 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":172,"skipped":2654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 
STEP: Creating a kubernetes client Oct 14 23:43:27.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override command Oct 14 23:43:27.938: INFO: Waiting up to 5m0s for pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745" in namespace "containers-3256" to be "Succeeded or Failed" Oct 14 23:43:28.023: INFO: Pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745": Phase="Pending", Reason="", readiness=false. Elapsed: 85.143332ms Oct 14 23:43:30.034: INFO: Pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095620864s Oct 14 23:43:32.038: INFO: Pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099825366s Oct 14 23:43:34.043: INFO: Pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.105235843s STEP: Saw pod success Oct 14 23:43:34.043: INFO: Pod "client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745" satisfied condition "Succeeded or Failed" Oct 14 23:43:34.046: INFO: Trying to get logs from node leguer-worker pod client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745 container test-container: STEP: delete the pod Oct 14 23:43:34.147: INFO: Waiting for pod client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745 to disappear Oct 14 23:43:34.153: INFO: Pod client-containers-df4c0840-0c07-4878-b0ab-bdd3e6a80745 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:43:34.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3256" for this suite. • [SLOW TEST:6.508 seconds] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":173,"skipped":2693,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:43:34.161: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 14 23:43:34.216: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:43:42.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8909" for this suite. 
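A pod that invokes init containers under `restartPolicy: Always`, as this test does, looks roughly like the sketch below. This is an assumption-laden illustration (names and images are placeholders), not the framework's generated spec:

```yaml
# Illustrative RestartAlways pod with two init containers. Init containers
# run to completion sequentially before any app container starts.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox:1.29
    command: ['true']
  - name: init2
    image: busybox:1.29
    command: ['true']
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.2
```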
• [SLOW TEST:8.009 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":174,"skipped":2714,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:43:42.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1014 23:43:52.292527 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 14 23:44:54.312: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
[AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:44:54.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9947" for this suite. • [SLOW TEST:72.152 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":175,"skipped":2725,"failed":0} SSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:44:54.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:44:54.407: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Oct 14 23:44:56.472: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:44:57.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-695" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":176,"skipped":2728,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:44:57.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating Agnhost RC Oct 14 23:44:58.194: INFO: namespace kubectl-7957 Oct 14 23:44:58.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7957' Oct 14 23:45:02.063: INFO: stderr: "" Oct 14 23:45:02.063: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 14 23:45:03.102: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:03.102: INFO: Found 0 / 1 Oct 14 23:45:04.526: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:04.526: INFO: Found 0 / 1 Oct 14 23:45:05.067: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:05.067: INFO: Found 0 / 1 Oct 14 23:45:06.068: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:06.068: INFO: Found 0 / 1 Oct 14 23:45:07.067: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:07.067: INFO: Found 1 / 1 Oct 14 23:45:07.067: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 14 23:45:07.069: INFO: Selector matched 1 pods for map[app:agnhost] Oct 14 23:45:07.069: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 14 23:45:07.069: INFO: wait on agnhost-primary startup in kubectl-7957 Oct 14 23:45:07.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config logs agnhost-primary-8gv2t agnhost-primary --namespace=kubectl-7957' Oct 14 23:45:07.201: INFO: stderr: "" Oct 14 23:45:07.201: INFO: stdout: "Paused\n" STEP: exposing RC Oct 14 23:45:07.201: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7957' Oct 14 23:45:07.367: INFO: stderr: "" Oct 14 23:45:07.367: INFO: stdout: "service/rm2 exposed\n" Oct 14 23:45:07.373: INFO: Service rm2 in namespace kubectl-7957 found. STEP: exposing service Oct 14 23:45:09.380: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7957' Oct 14 23:45:09.522: INFO: stderr: "" Oct 14 23:45:09.522: INFO: stdout: "service/rm3 exposed\n" Oct 14 23:45:09.538: INFO: Service rm3 in namespace kubectl-7957 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:45:11.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7957" for this suite. 
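The two `kubectl expose` invocations above create Services whose spec is roughly equivalent to the following sketch for `rm2` (`rm3` is analogous, with port 2345 and the same target port). The selector is inherited from the replication controller being exposed:

```yaml
# Approximate Service that "kubectl expose rc agnhost-primary --name=rm2
# --port=1234 --target-port=6379" produces. Selector assumed from the RC.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-7957
spec:
  selector:
    app: agnhost
  ports:
  - port: 1234
    targetPort: 6379
```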
• [SLOW TEST:13.821 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246 should create services for rc [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":177,"skipped":2739,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:45:11.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:45:15.781: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "emptydir-wrapper-2024" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":178,"skipped":2752,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:45:15.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 14 23:45:15.872: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:45:33.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-460" for this suite. 
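Renaming a served version of a multi-version CRD, as the test above does, amounts to editing one entry of `spec.versions`. The sketch below uses an illustrative group and kind (the test generates its own random CRD), and assumes a two-version CRD where `v1` is renamed to `v2` while a second version is left untouched:

```yaml
# Illustrative multi-version CRD after the rename: the old version name is
# no longer served, the new name is, and the other version is unchanged.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v2            # was "v1"; the published OpenAPI spec follows the rename
    served: true
    storage: true       # exactly one version may be the storage version
    schema:
      openAPIV3Schema:
        type: object
  - name: v3            # second version, not changed by the rename
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
```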
• [SLOW TEST:17.852 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":179,"skipped":2755,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:45:33.670: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Oct 14 23:45:39.787: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9294 PodName:pod-sharedvolume-7e302a0b-089e-46bb-be78-2189b720a811 
ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 14 23:45:39.787: INFO: >>> kubeConfig: /root/.kube/config I1014 23:45:39.822143 7 log.go:181] (0xc0034364d0) (0xc003d650e0) Create stream I1014 23:45:39.822201 7 log.go:181] (0xc0034364d0) (0xc003d650e0) Stream added, broadcasting: 1 I1014 23:45:39.824165 7 log.go:181] (0xc0034364d0) Reply frame received for 1 I1014 23:45:39.824222 7 log.go:181] (0xc0034364d0) (0xc003d65180) Create stream I1014 23:45:39.824238 7 log.go:181] (0xc0034364d0) (0xc003d65180) Stream added, broadcasting: 3 I1014 23:45:39.825406 7 log.go:181] (0xc0034364d0) Reply frame received for 3 I1014 23:45:39.825458 7 log.go:181] (0xc0034364d0) (0xc0040314a0) Create stream I1014 23:45:39.825475 7 log.go:181] (0xc0034364d0) (0xc0040314a0) Stream added, broadcasting: 5 I1014 23:45:39.826294 7 log.go:181] (0xc0034364d0) Reply frame received for 5 I1014 23:45:39.895957 7 log.go:181] (0xc0034364d0) Data frame received for 5 I1014 23:45:39.896012 7 log.go:181] (0xc0040314a0) (5) Data frame handling I1014 23:45:39.896037 7 log.go:181] (0xc0034364d0) Data frame received for 3 I1014 23:45:39.896048 7 log.go:181] (0xc003d65180) (3) Data frame handling I1014 23:45:39.896058 7 log.go:181] (0xc003d65180) (3) Data frame sent I1014 23:45:39.896072 7 log.go:181] (0xc0034364d0) Data frame received for 3 I1014 23:45:39.896089 7 log.go:181] (0xc003d65180) (3) Data frame handling I1014 23:45:39.897939 7 log.go:181] (0xc0034364d0) Data frame received for 1 I1014 23:45:39.897963 7 log.go:181] (0xc003d650e0) (1) Data frame handling I1014 23:45:39.897976 7 log.go:181] (0xc003d650e0) (1) Data frame sent I1014 23:45:39.897984 7 log.go:181] (0xc0034364d0) (0xc003d650e0) Stream removed, broadcasting: 1 I1014 23:45:39.897993 7 log.go:181] (0xc0034364d0) Go away received I1014 23:45:39.898178 7 log.go:181] (0xc0034364d0) (0xc003d650e0) Stream removed, broadcasting: 1 I1014 23:45:39.898217 7 log.go:181] (0xc0034364d0) 
(0xc003d65180) Stream removed, broadcasting: 3 I1014 23:45:39.898251 7 log.go:181] (0xc0034364d0) (0xc0040314a0) Stream removed, broadcasting: 5 Oct 14 23:45:39.898: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:45:39.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9294" for this suite. • [SLOW TEST:6.237 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":180,"skipped":2758,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:45:39.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-hw5x STEP: Creating a pod to test atomic-volume-subpath Oct 14 23:45:40.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hw5x" in namespace "subpath-6606" to be "Succeeded or Failed" Oct 14 23:45:40.046: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Pending", Reason="", readiness=false. Elapsed: 10.104033ms Oct 14 23:45:42.051: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015615587s Oct 14 23:45:44.056: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 4.020437258s Oct 14 23:45:46.060: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 6.024139011s Oct 14 23:45:48.065: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 8.02901205s Oct 14 23:45:50.069: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 10.033893613s Oct 14 23:45:52.074: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 12.038848789s Oct 14 23:45:54.079: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 14.043292749s Oct 14 23:45:56.083: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 16.047233348s Oct 14 23:45:58.086: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.050867078s Oct 14 23:46:00.091: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 20.055506678s Oct 14 23:46:02.096: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Running", Reason="", readiness=true. Elapsed: 22.060233704s Oct 14 23:46:04.099: INFO: Pod "pod-subpath-test-configmap-hw5x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063763168s STEP: Saw pod success Oct 14 23:46:04.099: INFO: Pod "pod-subpath-test-configmap-hw5x" satisfied condition "Succeeded or Failed" Oct 14 23:46:04.103: INFO: Trying to get logs from node leguer-worker pod pod-subpath-test-configmap-hw5x container test-container-subpath-configmap-hw5x: STEP: delete the pod Oct 14 23:46:04.143: INFO: Waiting for pod pod-subpath-test-configmap-hw5x to disappear Oct 14 23:46:04.150: INFO: Pod pod-subpath-test-configmap-hw5x no longer exists STEP: Deleting pod pod-subpath-test-configmap-hw5x Oct 14 23:46:04.150: INFO: Deleting pod "pod-subpath-test-configmap-hw5x" in namespace "subpath-6606" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:04.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6606" for this suite. 
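Mounting a ConfigMap key over an existing file via `subPath`, as this test exercises, is sketched below. The ConfigMap name, key, and command are illustrative assumptions; the framework's generated pod differs:

```yaml
# Illustrative pod that projects a single ConfigMap key onto a path that
# already exists in the container image, using volumeMounts[].subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-demo
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    configMap:
      name: my-configmap          # assumed ConfigMap containing key "mount-path"
  containers:
  - name: test-container-subpath
    image: busybox:1.29
    command: ['cat', '/etc/resolv.conf']
    volumeMounts:
    - name: test-volume
      mountPath: /etc/resolv.conf # existing file in the image
      subPath: mount-path         # only this key is mounted, over that file
```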
• [SLOW TEST:24.251 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":181,"skipped":2776,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:04.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-5763/secret-test-ecd07684-8406-42c0-b7ce-89dd406239b9 STEP: Creating a pod to test consume secrets Oct 14 23:46:04.220: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659" in namespace "secrets-5763" to be "Succeeded or Failed" Oct 14 23:46:04.224: INFO: Pod "pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659": Phase="Pending", Reason="", readiness=false. Elapsed: 3.317084ms Oct 14 23:46:06.228: INFO: Pod "pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007624736s Oct 14 23:46:08.233: INFO: Pod "pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659": Phase="Running", Reason="", readiness=true. Elapsed: 4.012493856s Oct 14 23:46:10.237: INFO: Pod "pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016681943s STEP: Saw pod success Oct 14 23:46:10.237: INFO: Pod "pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659" satisfied condition "Succeeded or Failed" Oct 14 23:46:10.240: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659 container env-test: STEP: delete the pod Oct 14 23:46:10.316: INFO: Waiting for pod pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659 to disappear Oct 14 23:46:10.319: INFO: Pod pod-configmaps-289e7934-6094-4bd8-a1f8-84fbc9a77659 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:10.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5763" for this suite. 
• [SLOW TEST:6.167 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":182,"skipped":2790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:10.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-9add71cf-4557-4a06-845b-6bc9287c9e77 STEP: Creating a pod to test consume configMaps Oct 14 23:46:10.399: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686" in namespace "projected-1299" to be "Succeeded or 
Failed" Oct 14 23:46:10.405: INFO: Pod "pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686": Phase="Pending", Reason="", readiness=false. Elapsed: 5.92972ms Oct 14 23:46:12.409: INFO: Pod "pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009963495s Oct 14 23:46:14.443: INFO: Pod "pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043817411s STEP: Saw pod success Oct 14 23:46:14.443: INFO: Pod "pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686" satisfied condition "Succeeded or Failed" Oct 14 23:46:14.446: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686 container projected-configmap-volume-test: STEP: delete the pod Oct 14 23:46:14.487: INFO: Waiting for pod pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686 to disappear Oct 14 23:46:14.495: INFO: Pod pod-projected-configmaps-cb3f996e-6cf9-4ea4-b334-2c29987ee686 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:14.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1299" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":183,"skipped":2829,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:14.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:46:14.607: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:15.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6129" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":184,"skipped":2838,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:15.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-9fce9e2c-4682-46fb-a3d1-e3acc2cfbc52 STEP: Creating a pod to test consume configMaps Oct 14 23:46:15.523: INFO: Waiting up to 5m0s for pod "pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13" in namespace "configmap-349" to be "Succeeded or Failed" Oct 14 23:46:15.559: INFO: Pod "pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13": Phase="Pending", Reason="", readiness=false. Elapsed: 36.52941ms Oct 14 23:46:17.565: INFO: Pod "pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042356427s Oct 14 23:46:19.568: INFO: Pod "pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044736349s STEP: Saw pod success Oct 14 23:46:19.568: INFO: Pod "pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13" satisfied condition "Succeeded or Failed" Oct 14 23:46:19.570: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13 container configmap-volume-test: STEP: delete the pod Oct 14 23:46:19.644: INFO: Waiting for pod pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13 to disappear Oct 14 23:46:19.650: INFO: Pod pod-configmaps-93ac9524-8423-4ba1-9080-8159ccb6dc13 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:19.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-349" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":185,"skipped":2841,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:19.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-4034f150-5ea2-4d45-acd8-0162b0cebd1a STEP: Creating a pod to test consume configMaps Oct 14 23:46:19.783: INFO: Waiting up to 5m0s for pod "pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c" in namespace "configmap-1106" to be "Succeeded or Failed" Oct 14 23:46:19.787: INFO: Pod "pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.992687ms Oct 14 23:46:21.791: INFO: Pod "pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007347642s Oct 14 23:46:23.813: INFO: Pod "pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.029992963s STEP: Saw pod success Oct 14 23:46:23.813: INFO: Pod "pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c" satisfied condition "Succeeded or Failed" Oct 14 23:46:23.822: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c container configmap-volume-test: STEP: delete the pod Oct 14 23:46:23.849: INFO: Waiting for pod pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c to disappear Oct 14 23:46:23.861: INFO: Pod pod-configmaps-f67b3710-236d-4bd5-ad21-ad812af97c5c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:23.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1106" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":186,"skipped":2852,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:23.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 14 23:46:24.699: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 14 23:46:26.709: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315984, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315984, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315984, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738315984, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:46:29.769: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:46:29.773: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:31.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7125" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.470 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":187,"skipped":2870,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 
23:46:31.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 14 23:46:36.036: INFO: Successfully updated pod "labelsupdate1e0bcc22-1d37-4033-854b-b10709708db2" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:40.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9420" for this suite. 
• [SLOW TEST:8.778 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":2872,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:40.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 14 23:46:40.254: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960921 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:46:40.255: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960922 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:46:40.255: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960923 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:40 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 14 
23:46:50.397: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960966 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:46:50.398: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960967 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 14 23:46:50.398: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3256 /api/v1/namespaces/watch-3256/configmaps/e2e-watch-test-label-changed 872062c1-b5f8-45e4-9aaf-548b4a3dc169 2960968 0 2020-10-14 23:46:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-14 23:46:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:46:50.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3256" for this 
suite. • [SLOW TEST:10.318 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":189,"skipped":2914,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:46:50.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8de5e3b0-4e5f-43e8-b9f0-acdb0a1991dc STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8de5e3b0-4e5f-43e8-b9f0-acdb0a1991dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] 
Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:19.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6976" for this suite. • [SLOW TEST:88.720 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":190,"skipped":2964,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:19.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:48:19.429: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Oct 14 23:48:19.445: INFO: Pod name sample-pod: Found 0 pods out of 1 Oct 14 23:48:24.503: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 14 23:48:24.503: INFO: Creating deployment "test-rolling-update-deployment" Oct 14 23:48:24.546: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Oct 14 23:48:24.579: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Oct 14 23:48:26.607: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Oct 14 23:48:26.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316104, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316104, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316104, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316104, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:48:28.624: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) 
[AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 14 23:48:28.711: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8067 /apis/apps/v1/namespaces/deployment-8067/deployments/test-rolling-update-deployment e3002c6d-44a7-452e-b70f-48d3a42d9835 2961331 1 2020-10-14 23:48:24 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-10-14 23:48:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 23:48:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003927818 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-14 23:48:24 +0000 
UTC,LastTransitionTime:2020-10-14 23:48:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-10-14 23:48:28 +0000 UTC,LastTransitionTime:2020-10-14 23:48:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 14 23:48:28.715: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-8067 /apis/apps/v1/namespaces/deployment-8067/replicasets/test-rolling-update-deployment-c4cb8d6d9 dc7270e1-6a19-4526-9c9d-14be0f1ca0b7 2961319 1 2020-10-14 23:48:24 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e3002c6d-44a7-452e-b70f-48d3a42d9835 0xc003927d70 0xc003927d71}] [] [{kube-controller-manager Update apps/v1 2020-10-14 23:48:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3002c6d-44a7-452e-b70f-48d3a42d9835\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003927de8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:48:28.715: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 14 23:48:28.715: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8067 /apis/apps/v1/namespaces/deployment-8067/replicasets/test-rolling-update-controller 8e02c288-35e4-48c4-9ecd-91592ae96db1 2961330 2 2020-10-14 23:48:19 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e3002c6d-44a7-452e-b70f-48d3a42d9835 0xc003927c47 0xc003927c48}] [] [{e2e.test Update apps/v1 2020-10-14 23:48:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-14 23:48:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e3002c6d-44a7-452e-b70f-48d3a42d9835\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003927d08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 14 23:48:28.718: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-5z9hr" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-5z9hr test-rolling-update-deployment-c4cb8d6d9- deployment-8067 /api/v1/namespaces/deployment-8067/pods/test-rolling-update-deployment-c4cb8d6d9-5z9hr 6396303d-d1b3-4501-b6bb-2becaca88a07 2961318 0 2020-10-14 23:48:24 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 dc7270e1-6a19-4526-9c9d-14be0f1ca0b7 0xc004a54290 0xc004a54291}] [] [{kube-controller-manager Update v1 2020-10-14 23:48:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"dc7270e1-6a19-4526-9c9d-14be0f1ca0b7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-14 23:48:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-kjrdd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-kjrdd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-kjrdd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{
},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:48:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:48:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-14 23:48:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.74,StartTime:2020-10-14 23:48:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-14 23:48:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://0725f401128a76b7c2c56b54694464fec347bd3dd2a2c5be56b03b97fd14b8c7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:28.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8067" for this suite. 
• [SLOW TEST:9.592 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":191,"skipped":2981,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:28.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 14 23:48:29.027: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444" in namespace "projected-2888" to be "Succeeded 
or Failed" Oct 14 23:48:29.063: INFO: Pod "downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444": Phase="Pending", Reason="", readiness=false. Elapsed: 35.787699ms Oct 14 23:48:31.177: INFO: Pod "downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150435156s Oct 14 23:48:33.182: INFO: Pod "downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154616932s STEP: Saw pod success Oct 14 23:48:33.182: INFO: Pod "downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444" satisfied condition "Succeeded or Failed" Oct 14 23:48:33.185: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444 container client-container: STEP: delete the pod Oct 14 23:48:33.274: INFO: Waiting for pod downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444 to disappear Oct 14 23:48:33.307: INFO: Pod downwardapi-volume-2dc04c8c-b8d3-4815-b247-6bbbe3969444 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:33.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2888" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":192,"skipped":2983,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:33.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7877" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":193,"skipped":2989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:33.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 14 23:48:38.105: INFO: Successfully updated pod "pod-update-d98245ef-2475-481f-ae78-e31e6cb8923b" STEP: verifying the updated pod is in kubernetes Oct 14 23:48:38.154: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:38.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4944" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":194,"skipped":3018,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:38.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override all Oct 14 23:48:38.249: INFO: Waiting up to 5m0s for pod "client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33" in namespace "containers-6032" to be "Succeeded or Failed" Oct 14 23:48:38.300: INFO: Pod "client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33": Phase="Pending", Reason="", readiness=false. Elapsed: 50.355552ms Oct 14 23:48:40.384: INFO: Pod "client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134154793s Oct 14 23:48:42.388: INFO: Pod "client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.138485595s STEP: Saw pod success Oct 14 23:48:42.388: INFO: Pod "client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33" satisfied condition "Succeeded or Failed" Oct 14 23:48:42.391: INFO: Trying to get logs from node leguer-worker pod client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33 container test-container: STEP: delete the pod Oct 14 23:48:42.445: INFO: Waiting for pod client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33 to disappear Oct 14 23:48:42.452: INFO: Pod client-containers-c2485aac-6a93-4fd5-8c68-1686c9cecd33 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:42.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6032" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3031,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:42.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:48:55.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7323" for this suite. • [SLOW TEST:13.188 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":303,"completed":196,"skipped":3031,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:48:55.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:06.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3242" for this suite. • [SLOW TEST:11.149 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":303,"completed":197,"skipped":3061,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:06.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3051 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3051 I1014 23:49:07.011487 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3051, replica count: 2 I1014 23:49:10.061981 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1014 23:49:13.062234 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 14 23:49:13.062: INFO: Creating new exec pod Oct 14 23:49:18.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3051 execpod84rj8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 14 23:49:18.342: INFO: stderr: "I1014 23:49:18.237478 2022 log.go:181] (0xc00073efd0) (0xc0002359a0) Create stream\nI1014 23:49:18.237539 2022 log.go:181] (0xc00073efd0) (0xc0002359a0) Stream added, broadcasting: 1\nI1014 23:49:18.243230 2022 log.go:181] (0xc00073efd0) Reply frame received for 1\nI1014 23:49:18.243271 2022 log.go:181] (0xc00073efd0) (0xc000c260a0) Create stream\nI1014 23:49:18.243282 2022 log.go:181] (0xc00073efd0) (0xc000c260a0) Stream added, broadcasting: 3\nI1014 23:49:18.244186 2022 log.go:181] (0xc00073efd0) Reply frame received for 3\nI1014 23:49:18.244215 2022 log.go:181] (0xc00073efd0) (0xc0002340a0) Create stream\nI1014 23:49:18.244224 2022 log.go:181] (0xc00073efd0) (0xc0002340a0) Stream added, broadcasting: 5\nI1014 23:49:18.245334 2022 log.go:181] (0xc00073efd0) Reply frame received for 5\nI1014 23:49:18.332208 2022 log.go:181] (0xc00073efd0) Data frame received for 5\nI1014 23:49:18.332243 2022 log.go:181] (0xc0002340a0) (5) Data frame handling\nI1014 23:49:18.332272 2022 log.go:181] (0xc0002340a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1014 23:49:18.333417 2022 log.go:181] (0xc00073efd0) Data frame received for 5\nI1014 23:49:18.333453 2022 log.go:181] (0xc0002340a0) (5) Data frame handling\nI1014 23:49:18.333483 2022 log.go:181] (0xc0002340a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1014 23:49:18.333814 2022 log.go:181] 
(0xc00073efd0) Data frame received for 5\nI1014 23:49:18.333841 2022 log.go:181] (0xc0002340a0) (5) Data frame handling\nI1014 23:49:18.334025 2022 log.go:181] (0xc00073efd0) Data frame received for 3\nI1014 23:49:18.334050 2022 log.go:181] (0xc000c260a0) (3) Data frame handling\nI1014 23:49:18.335809 2022 log.go:181] (0xc00073efd0) Data frame received for 1\nI1014 23:49:18.335834 2022 log.go:181] (0xc0002359a0) (1) Data frame handling\nI1014 23:49:18.335857 2022 log.go:181] (0xc0002359a0) (1) Data frame sent\nI1014 23:49:18.335878 2022 log.go:181] (0xc00073efd0) (0xc0002359a0) Stream removed, broadcasting: 1\nI1014 23:49:18.335899 2022 log.go:181] (0xc00073efd0) Go away received\nI1014 23:49:18.336261 2022 log.go:181] (0xc00073efd0) (0xc0002359a0) Stream removed, broadcasting: 1\nI1014 23:49:18.336279 2022 log.go:181] (0xc00073efd0) (0xc000c260a0) Stream removed, broadcasting: 3\nI1014 23:49:18.336288 2022 log.go:181] (0xc00073efd0) (0xc0002340a0) Stream removed, broadcasting: 5\n" Oct 14 23:49:18.342: INFO: stdout: "" Oct 14 23:49:18.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3051 execpod84rj8 -- /bin/sh -x -c nc -zv -t -w 2 10.108.218.51 80' Oct 14 23:49:18.559: INFO: stderr: "I1014 23:49:18.482600 2040 log.go:181] (0xc000ae4dc0) (0xc00019e1e0) Create stream\nI1014 23:49:18.482673 2040 log.go:181] (0xc000ae4dc0) (0xc00019e1e0) Stream added, broadcasting: 1\nI1014 23:49:18.488131 2040 log.go:181] (0xc000ae4dc0) Reply frame received for 1\nI1014 23:49:18.488167 2040 log.go:181] (0xc000ae4dc0) (0xc000578000) Create stream\nI1014 23:49:18.488175 2040 log.go:181] (0xc000ae4dc0) (0xc000578000) Stream added, broadcasting: 3\nI1014 23:49:18.489207 2040 log.go:181] (0xc000ae4dc0) Reply frame received for 3\nI1014 23:49:18.489236 2040 log.go:181] (0xc000ae4dc0) (0xc0005780a0) Create stream\nI1014 23:49:18.489245 2040 log.go:181] (0xc000ae4dc0) (0xc0005780a0) Stream added, 
broadcasting: 5\nI1014 23:49:18.490298 2040 log.go:181] (0xc000ae4dc0) Reply frame received for 5\nI1014 23:49:18.550083 2040 log.go:181] (0xc000ae4dc0) Data frame received for 5\nI1014 23:49:18.550103 2040 log.go:181] (0xc0005780a0) (5) Data frame handling\nI1014 23:49:18.550111 2040 log.go:181] (0xc0005780a0) (5) Data frame sent\nI1014 23:49:18.550116 2040 log.go:181] (0xc000ae4dc0) Data frame received for 5\nI1014 23:49:18.550120 2040 log.go:181] (0xc0005780a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.218.51 80\nConnection to 10.108.218.51 80 port [tcp/http] succeeded!\nI1014 23:49:18.550245 2040 log.go:181] (0xc000ae4dc0) Data frame received for 3\nI1014 23:49:18.550281 2040 log.go:181] (0xc000578000) (3) Data frame handling\nI1014 23:49:18.552160 2040 log.go:181] (0xc000ae4dc0) Data frame received for 1\nI1014 23:49:18.552184 2040 log.go:181] (0xc00019e1e0) (1) Data frame handling\nI1014 23:49:18.552197 2040 log.go:181] (0xc00019e1e0) (1) Data frame sent\nI1014 23:49:18.552216 2040 log.go:181] (0xc000ae4dc0) (0xc00019e1e0) Stream removed, broadcasting: 1\nI1014 23:49:18.552239 2040 log.go:181] (0xc000ae4dc0) Go away received\nI1014 23:49:18.552947 2040 log.go:181] (0xc000ae4dc0) (0xc00019e1e0) Stream removed, broadcasting: 1\nI1014 23:49:18.552979 2040 log.go:181] (0xc000ae4dc0) (0xc000578000) Stream removed, broadcasting: 3\nI1014 23:49:18.552992 2040 log.go:181] (0xc000ae4dc0) (0xc0005780a0) Stream removed, broadcasting: 5\n" Oct 14 23:49:18.559: INFO: stdout: "" Oct 14 23:49:18.559: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3051 execpod84rj8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 32612' Oct 14 23:49:18.764: INFO: stderr: "I1014 23:49:18.690468 2058 log.go:181] (0xc0008bd550) (0xc0008b2aa0) Create stream\nI1014 23:49:18.690511 2058 log.go:181] (0xc0008bd550) (0xc0008b2aa0) Stream added, broadcasting: 1\nI1014 23:49:18.695691 2058 log.go:181] (0xc0008bd550) 
Reply frame received for 1\nI1014 23:49:18.695721 2058 log.go:181] (0xc0008bd550) (0xc000c080a0) Create stream\nI1014 23:49:18.695729 2058 log.go:181] (0xc0008bd550) (0xc000c080a0) Stream added, broadcasting: 3\nI1014 23:49:18.696558 2058 log.go:181] (0xc0008bd550) Reply frame received for 3\nI1014 23:49:18.696591 2058 log.go:181] (0xc0008bd550) (0xc0008b2000) Create stream\nI1014 23:49:18.696602 2058 log.go:181] (0xc0008bd550) (0xc0008b2000) Stream added, broadcasting: 5\nI1014 23:49:18.697577 2058 log.go:181] (0xc0008bd550) Reply frame received for 5\nI1014 23:49:18.757625 2058 log.go:181] (0xc0008bd550) Data frame received for 5\nI1014 23:49:18.757653 2058 log.go:181] (0xc0008b2000) (5) Data frame handling\nI1014 23:49:18.757667 2058 log.go:181] (0xc0008b2000) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.18 32612\nConnection to 172.18.0.18 32612 port [tcp/32612] succeeded!\nI1014 23:49:18.757934 2058 log.go:181] (0xc0008bd550) Data frame received for 3\nI1014 23:49:18.757960 2058 log.go:181] (0xc000c080a0) (3) Data frame handling\nI1014 23:49:18.757981 2058 log.go:181] (0xc0008bd550) Data frame received for 5\nI1014 23:49:18.757991 2058 log.go:181] (0xc0008b2000) (5) Data frame handling\nI1014 23:49:18.759600 2058 log.go:181] (0xc0008bd550) Data frame received for 1\nI1014 23:49:18.759633 2058 log.go:181] (0xc0008b2aa0) (1) Data frame handling\nI1014 23:49:18.759667 2058 log.go:181] (0xc0008b2aa0) (1) Data frame sent\nI1014 23:49:18.759696 2058 log.go:181] (0xc0008bd550) (0xc0008b2aa0) Stream removed, broadcasting: 1\nI1014 23:49:18.759727 2058 log.go:181] (0xc0008bd550) Go away received\nI1014 23:49:18.760021 2058 log.go:181] (0xc0008bd550) (0xc0008b2aa0) Stream removed, broadcasting: 1\nI1014 23:49:18.760038 2058 log.go:181] (0xc0008bd550) (0xc000c080a0) Stream removed, broadcasting: 3\nI1014 23:49:18.760046 2058 log.go:181] (0xc0008bd550) (0xc0008b2000) Stream removed, broadcasting: 5\n" Oct 14 23:49:18.764: INFO: stdout: "" Oct 14 23:49:18.764: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3051 execpod84rj8 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 32612' Oct 14 23:49:18.999: INFO: stderr: "I1014 23:49:18.910391 2076 log.go:181] (0xc000d211e0) (0xc0000ff9a0) Create stream\nI1014 23:49:18.910447 2076 log.go:181] (0xc000d211e0) (0xc0000ff9a0) Stream added, broadcasting: 1\nI1014 23:49:18.915551 2076 log.go:181] (0xc000d211e0) Reply frame received for 1\nI1014 23:49:18.915587 2076 log.go:181] (0xc000d211e0) (0xc0000fe1e0) Create stream\nI1014 23:49:18.915605 2076 log.go:181] (0xc000d211e0) (0xc0000fe1e0) Stream added, broadcasting: 3\nI1014 23:49:18.916576 2076 log.go:181] (0xc000d211e0) Reply frame received for 3\nI1014 23:49:18.916629 2076 log.go:181] (0xc000d211e0) (0xc0000fe960) Create stream\nI1014 23:49:18.916644 2076 log.go:181] (0xc000d211e0) (0xc0000fe960) Stream added, broadcasting: 5\nI1014 23:49:18.917625 2076 log.go:181] (0xc000d211e0) Reply frame received for 5\nI1014 23:49:18.990784 2076 log.go:181] (0xc000d211e0) Data frame received for 3\nI1014 23:49:18.990836 2076 log.go:181] (0xc0000fe1e0) (3) Data frame handling\nI1014 23:49:18.990915 2076 log.go:181] (0xc000d211e0) Data frame received for 5\nI1014 23:49:18.990989 2076 log.go:181] (0xc0000fe960) (5) Data frame handling\nI1014 23:49:18.991025 2076 log.go:181] (0xc0000fe960) (5) Data frame sent\nI1014 23:49:18.991054 2076 log.go:181] (0xc000d211e0) Data frame received for 5\nI1014 23:49:18.991073 2076 log.go:181] (0xc0000fe960) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.17 32612\nConnection to 172.18.0.17 32612 port [tcp/32612] succeeded!\nI1014 23:49:18.992773 2076 log.go:181] (0xc000d211e0) Data frame received for 1\nI1014 23:49:18.992803 2076 log.go:181] (0xc0000ff9a0) (1) Data frame handling\nI1014 23:49:18.992826 2076 log.go:181] (0xc0000ff9a0) (1) Data frame sent\nI1014 23:49:18.992969 2076 log.go:181] (0xc000d211e0) (0xc0000ff9a0) Stream 
removed, broadcasting: 1\nI1014 23:49:18.993020 2076 log.go:181] (0xc000d211e0) Go away received\nI1014 23:49:18.993602 2076 log.go:181] (0xc000d211e0) (0xc0000ff9a0) Stream removed, broadcasting: 1\nI1014 23:49:18.993630 2076 log.go:181] (0xc000d211e0) (0xc0000fe1e0) Stream removed, broadcasting: 3\nI1014 23:49:18.993651 2076 log.go:181] (0xc000d211e0) (0xc0000fe960) Stream removed, broadcasting: 5\n" Oct 14 23:49:18.999: INFO: stdout: "" Oct 14 23:49:18.999: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:19.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3051" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:12.246 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":198,"skipped":3064,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] 
[sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:19.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on node default medium Oct 14 23:49:19.130: INFO: Waiting up to 5m0s for pod "pod-b10e78d1-3e5b-4595-8c6f-11e6666af870" in namespace "emptydir-6909" to be "Succeeded or Failed" Oct 14 23:49:19.156: INFO: Pod "pod-b10e78d1-3e5b-4595-8c6f-11e6666af870": Phase="Pending", Reason="", readiness=false. Elapsed: 26.654786ms Oct 14 23:49:21.161: INFO: Pod "pod-b10e78d1-3e5b-4595-8c6f-11e6666af870": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030969165s Oct 14 23:49:23.164: INFO: Pod "pod-b10e78d1-3e5b-4595-8c6f-11e6666af870": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034071861s STEP: Saw pod success Oct 14 23:49:23.164: INFO: Pod "pod-b10e78d1-3e5b-4595-8c6f-11e6666af870" satisfied condition "Succeeded or Failed" Oct 14 23:49:23.186: INFO: Trying to get logs from node leguer-worker2 pod pod-b10e78d1-3e5b-4595-8c6f-11e6666af870 container test-container: STEP: delete the pod Oct 14 23:49:23.224: INFO: Waiting for pod pod-b10e78d1-3e5b-4595-8c6f-11e6666af870 to disappear Oct 14 23:49:23.244: INFO: Pod pod-b10e78d1-3e5b-4595-8c6f-11e6666af870 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:23.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6909" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":199,"skipped":3085,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:23.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 14 23:49:24.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 14 23:49:26.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 14 23:49:28.422: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316164, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 14 23:49:31.446: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:31.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6732" for this suite. STEP: Destroying namespace "webhook-6732-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.483 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":200,"skipped":3119,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:31.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 23:49:31.807: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 
14 23:49:31.817: INFO: Waiting for terminating namespaces to be deleted... Oct 14 23:49:31.821: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Oct 14 23:49:31.824: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:49:31.824: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:49:31.824: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:49:31.824: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 23:49:31.824: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Oct 14 23:49:31.828: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:49:31.828: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:49:31.828: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Oct 14 23:49:31.828: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-483f8dd6-a8d6-438f-8ed8-211142b087d0 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-483f8dd6-a8d6-438f-8ed8-211142b087d0 off the node leguer-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-483f8dd6-a8d6-438f-8ed8-211142b087d0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:48.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1792" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.357 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":201,"skipped":3131,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:48.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:49:59.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7296" for this suite. • [SLOW TEST:11.536 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":303,"completed":202,"skipped":3147,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:49:59.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-9b861be8-cb63-4ab3-b654-95804534591d STEP: Creating secret with name s-test-opt-upd-ffa1023e-0969-4a0a-b278-9cfb619fbc86 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-9b861be8-cb63-4ab3-b654-95804534591d STEP: Updating secret s-test-opt-upd-ffa1023e-0969-4a0a-b278-9cfb619fbc86 STEP: Creating secret with name s-test-opt-create-3fbe06c5-8007-4891-8f56-524cc7fef49e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:50:07.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-138" for this suite. 
• [SLOW TEST:8.288 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":203,"skipped":3153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:50:07.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-ea96afd9-0903-47da-9e18-a50a8ad71eaf STEP: Creating a pod to test consume configMaps Oct 14 23:50:07.996: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604" in namespace "configmap-3355" to be "Succeeded or 
Failed" Oct 14 23:50:08.018: INFO: Pod "pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604": Phase="Pending", Reason="", readiness=false. Elapsed: 21.793695ms Oct 14 23:50:10.022: INFO: Pod "pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025838813s Oct 14 23:50:12.026: INFO: Pod "pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030006463s STEP: Saw pod success Oct 14 23:50:12.026: INFO: Pod "pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604" satisfied condition "Succeeded or Failed" Oct 14 23:50:12.031: INFO: Trying to get logs from node leguer-worker2 pod pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604 container configmap-volume-test: STEP: delete the pod Oct 14 23:50:12.107: INFO: Waiting for pod pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604 to disappear Oct 14 23:50:12.112: INFO: Pod pod-configmaps-dcf91c7c-4982-47a3-b33e-f719894b4604 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:50:12.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3355" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3179,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] server version should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:50:12.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Request ServerVersion STEP: Confirm major version Oct 14 23:50:12.161: INFO: Major version: 1 STEP: Confirm minor version Oct 14 23:50:12.161: INFO: cleanMinorVersion: 19 Oct 14 23:50:12.161: INFO: Minor version: 19 [AfterEach] [sig-api-machinery] server version /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:50:12.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-6207" for this suite. 
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":205,"skipped":3194,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:50:12.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-00212184-5cb1-465d-ac52-eb4e4418e3e2 STEP: Creating a pod to test consume secrets Oct 14 23:50:12.320: INFO: Waiting up to 5m0s for pod "pod-secrets-2c09d854-1b51-4276-845a-70469213bb74" in namespace "secrets-1981" to be "Succeeded or Failed" Oct 14 23:50:12.322: INFO: Pod "pod-secrets-2c09d854-1b51-4276-845a-70469213bb74": Phase="Pending", Reason="", readiness=false. Elapsed: 1.71881ms Oct 14 23:50:14.440: INFO: Pod "pod-secrets-2c09d854-1b51-4276-845a-70469213bb74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119828891s Oct 14 23:50:16.445: INFO: Pod "pod-secrets-2c09d854-1b51-4276-845a-70469213bb74": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.12472028s STEP: Saw pod success Oct 14 23:50:16.445: INFO: Pod "pod-secrets-2c09d854-1b51-4276-845a-70469213bb74" satisfied condition "Succeeded or Failed" Oct 14 23:50:16.448: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-2c09d854-1b51-4276-845a-70469213bb74 container secret-volume-test: STEP: delete the pod Oct 14 23:50:16.720: INFO: Waiting for pod pod-secrets-2c09d854-1b51-4276-845a-70469213bb74 to disappear Oct 14 23:50:16.836: INFO: Pod pod-secrets-2c09d854-1b51-4276-845a-70469213bb74 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:50:16.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1981" for this suite. STEP: Destroying namespace "secret-namespace-4638" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":206,"skipped":3209,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:50:16.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] 
StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9174 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-9174 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9174 Oct 14 23:50:16.996: INFO: Found 0 stateful pods, waiting for 1 Oct 14 23:50:27.001: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 14 23:50:27.004: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:50:27.285: INFO: stderr: "I1014 23:50:27.144167 2095 log.go:181] (0xc000142370) (0xc000f180a0) Create stream\nI1014 23:50:27.144222 2095 log.go:181] (0xc000142370) (0xc000f180a0) Stream added, broadcasting: 1\nI1014 23:50:27.146221 2095 log.go:181] (0xc000142370) Reply frame received for 1\nI1014 23:50:27.146250 2095 log.go:181] (0xc000142370) (0xc000f18140) Create stream\nI1014 23:50:27.146257 2095 log.go:181] (0xc000142370) (0xc000f18140) Stream added, broadcasting: 3\nI1014 23:50:27.147097 2095 log.go:181] (0xc000142370) Reply frame received for 3\nI1014 23:50:27.147128 2095 log.go:181] (0xc000142370) (0xc000f18280) Create stream\nI1014 
23:50:27.147139 2095 log.go:181] (0xc000142370) (0xc000f18280) Stream added, broadcasting: 5\nI1014 23:50:27.148175 2095 log.go:181] (0xc000142370) Reply frame received for 5\nI1014 23:50:27.241330 2095 log.go:181] (0xc000142370) Data frame received for 5\nI1014 23:50:27.241355 2095 log.go:181] (0xc000f18280) (5) Data frame handling\nI1014 23:50:27.241369 2095 log.go:181] (0xc000f18280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:50:27.273981 2095 log.go:181] (0xc000142370) Data frame received for 3\nI1014 23:50:27.274007 2095 log.go:181] (0xc000f18140) (3) Data frame handling\nI1014 23:50:27.274021 2095 log.go:181] (0xc000f18140) (3) Data frame sent\nI1014 23:50:27.274032 2095 log.go:181] (0xc000142370) Data frame received for 3\nI1014 23:50:27.274042 2095 log.go:181] (0xc000f18140) (3) Data frame handling\nI1014 23:50:27.274316 2095 log.go:181] (0xc000142370) Data frame received for 5\nI1014 23:50:27.274344 2095 log.go:181] (0xc000f18280) (5) Data frame handling\nI1014 23:50:27.276334 2095 log.go:181] (0xc000142370) Data frame received for 1\nI1014 23:50:27.276372 2095 log.go:181] (0xc000f180a0) (1) Data frame handling\nI1014 23:50:27.276403 2095 log.go:181] (0xc000f180a0) (1) Data frame sent\nI1014 23:50:27.276435 2095 log.go:181] (0xc000142370) (0xc000f180a0) Stream removed, broadcasting: 1\nI1014 23:50:27.276466 2095 log.go:181] (0xc000142370) Go away received\nI1014 23:50:27.277137 2095 log.go:181] (0xc000142370) (0xc000f180a0) Stream removed, broadcasting: 1\nI1014 23:50:27.277160 2095 log.go:181] (0xc000142370) (0xc000f18140) Stream removed, broadcasting: 3\nI1014 23:50:27.277172 2095 log.go:181] (0xc000142370) (0xc000f18280) Stream removed, broadcasting: 5\n" Oct 14 23:50:27.285: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:50:27.285: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' 
Oct 14 23:50:27.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 14 23:50:37.293: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:50:37.293: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:50:37.318: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:50:37.318: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC }] Oct 14 23:50:37.318: INFO: Oct 14 23:50:37.318: INFO: StatefulSet ss has not reached scale 3, at 1 Oct 14 23:50:38.324: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.985284924s Oct 14 23:50:39.543: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.979846637s Oct 14 23:50:40.548: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.761144706s Oct 14 23:50:41.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.755531391s Oct 14 23:50:42.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.73764321s Oct 14 23:50:43.599: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.728060863s Oct 14 23:50:44.605: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.704381387s Oct 14 23:50:45.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.69880646s Oct 14 23:50:46.614: INFO: Verifying statefulset ss doesn't scale past 3 for another 694.247425ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9174 Oct 14 23:50:47.626: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:50:47.859: INFO: stderr: "I1014 23:50:47.767842 2114 log.go:181] (0xc00021e000) (0xc000d3a1e0) Create stream\nI1014 23:50:47.767897 2114 log.go:181] (0xc00021e000) (0xc000d3a1e0) Stream added, broadcasting: 1\nI1014 23:50:47.769840 2114 log.go:181] (0xc00021e000) Reply frame received for 1\nI1014 23:50:47.769879 2114 log.go:181] (0xc00021e000) (0xc000459540) Create stream\nI1014 23:50:47.769898 2114 log.go:181] (0xc00021e000) (0xc000459540) Stream added, broadcasting: 3\nI1014 23:50:47.770748 2114 log.go:181] (0xc00021e000) Reply frame received for 3\nI1014 23:50:47.770800 2114 log.go:181] (0xc00021e000) (0xc000d3a280) Create stream\nI1014 23:50:47.770814 2114 log.go:181] (0xc00021e000) (0xc000d3a280) Stream added, broadcasting: 5\nI1014 23:50:47.771676 2114 log.go:181] (0xc00021e000) Reply frame received for 5\nI1014 23:50:47.853268 2114 log.go:181] (0xc00021e000) Data frame received for 5\nI1014 23:50:47.853300 2114 log.go:181] (0xc000d3a280) (5) Data frame handling\nI1014 23:50:47.853315 2114 log.go:181] (0xc000d3a280) (5) Data frame sent\nI1014 23:50:47.853325 2114 log.go:181] (0xc00021e000) Data frame received for 5\nI1014 23:50:47.853335 2114 log.go:181] (0xc000d3a280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1014 23:50:47.853384 2114 log.go:181] (0xc00021e000) Data frame received for 3\nI1014 23:50:47.853435 2114 log.go:181] (0xc000459540) (3) Data frame handling\nI1014 23:50:47.853471 2114 log.go:181] (0xc000459540) (3) Data frame sent\nI1014 23:50:47.853512 2114 log.go:181] (0xc00021e000) Data frame received for 3\nI1014 23:50:47.853537 2114 log.go:181] (0xc000459540) (3) Data frame handling\nI1014 23:50:47.854827 2114 log.go:181] (0xc00021e000) Data frame received for 1\nI1014 23:50:47.854841 2114 
log.go:181] (0xc000d3a1e0) (1) Data frame handling\nI1014 23:50:47.854850 2114 log.go:181] (0xc000d3a1e0) (1) Data frame sent\nI1014 23:50:47.854860 2114 log.go:181] (0xc00021e000) (0xc000d3a1e0) Stream removed, broadcasting: 1\nI1014 23:50:47.854894 2114 log.go:181] (0xc00021e000) Go away received\nI1014 23:50:47.855161 2114 log.go:181] (0xc00021e000) (0xc000d3a1e0) Stream removed, broadcasting: 1\nI1014 23:50:47.855180 2114 log.go:181] (0xc00021e000) (0xc000459540) Stream removed, broadcasting: 3\nI1014 23:50:47.855189 2114 log.go:181] (0xc00021e000) (0xc000d3a280) Stream removed, broadcasting: 5\n" Oct 14 23:50:47.859: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:50:47.859: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:50:47.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:50:48.089: INFO: stderr: "I1014 23:50:48.008541 2132 log.go:181] (0xc000b37340) (0xc000b2e960) Create stream\nI1014 23:50:48.009340 2132 log.go:181] (0xc000b37340) (0xc000b2e960) Stream added, broadcasting: 1\nI1014 23:50:48.014709 2132 log.go:181] (0xc000b37340) Reply frame received for 1\nI1014 23:50:48.014756 2132 log.go:181] (0xc000b37340) (0xc000b2e000) Create stream\nI1014 23:50:48.014770 2132 log.go:181] (0xc000b37340) (0xc000b2e000) Stream added, broadcasting: 3\nI1014 23:50:48.015774 2132 log.go:181] (0xc000b37340) Reply frame received for 3\nI1014 23:50:48.015824 2132 log.go:181] (0xc000b37340) (0xc00099bea0) Create stream\nI1014 23:50:48.015835 2132 log.go:181] (0xc000b37340) (0xc00099bea0) Stream added, broadcasting: 5\nI1014 23:50:48.016774 2132 log.go:181] (0xc000b37340) Reply frame received for 5\nI1014 23:50:48.082613 2132 log.go:181] (0xc000b37340) 
Data frame received for 5\nI1014 23:50:48.082654 2132 log.go:181] (0xc00099bea0) (5) Data frame handling\nI1014 23:50:48.082669 2132 log.go:181] (0xc00099bea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1014 23:50:48.082691 2132 log.go:181] (0xc000b37340) Data frame received for 3\nI1014 23:50:48.082704 2132 log.go:181] (0xc000b2e000) (3) Data frame handling\nI1014 23:50:48.082716 2132 log.go:181] (0xc000b2e000) (3) Data frame sent\nI1014 23:50:48.082729 2132 log.go:181] (0xc000b37340) Data frame received for 3\nI1014 23:50:48.082740 2132 log.go:181] (0xc000b2e000) (3) Data frame handling\nI1014 23:50:48.082753 2132 log.go:181] (0xc000b37340) Data frame received for 5\nI1014 23:50:48.082765 2132 log.go:181] (0xc00099bea0) (5) Data frame handling\nI1014 23:50:48.084491 2132 log.go:181] (0xc000b37340) Data frame received for 1\nI1014 23:50:48.084524 2132 log.go:181] (0xc000b2e960) (1) Data frame handling\nI1014 23:50:48.084535 2132 log.go:181] (0xc000b2e960) (1) Data frame sent\nI1014 23:50:48.084546 2132 log.go:181] (0xc000b37340) (0xc000b2e960) Stream removed, broadcasting: 1\nI1014 23:50:48.084565 2132 log.go:181] (0xc000b37340) Go away received\nI1014 23:50:48.085283 2132 log.go:181] (0xc000b37340) (0xc000b2e960) Stream removed, broadcasting: 1\nI1014 23:50:48.085308 2132 log.go:181] (0xc000b37340) (0xc000b2e000) Stream removed, broadcasting: 3\nI1014 23:50:48.085321 2132 log.go:181] (0xc000b37340) (0xc00099bea0) Stream removed, broadcasting: 5\n" Oct 14 23:50:48.089: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:50:48.089: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:50:48.089: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 
ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 14 23:50:48.306: INFO: stderr: "I1014 23:50:48.226968 2150 log.go:181] (0xc0009d9550) (0xc0009d0aa0) Create stream\nI1014 23:50:48.227024 2150 log.go:181] (0xc0009d9550) (0xc0009d0aa0) Stream added, broadcasting: 1\nI1014 23:50:48.232309 2150 log.go:181] (0xc0009d9550) Reply frame received for 1\nI1014 23:50:48.232357 2150 log.go:181] (0xc0009d9550) (0xc0001541e0) Create stream\nI1014 23:50:48.232370 2150 log.go:181] (0xc0009d9550) (0xc0001541e0) Stream added, broadcasting: 3\nI1014 23:50:48.233685 2150 log.go:181] (0xc0009d9550) Reply frame received for 3\nI1014 23:50:48.233734 2150 log.go:181] (0xc0009d9550) (0xc000b10000) Create stream\nI1014 23:50:48.233751 2150 log.go:181] (0xc0009d9550) (0xc000b10000) Stream added, broadcasting: 5\nI1014 23:50:48.234682 2150 log.go:181] (0xc0009d9550) Reply frame received for 5\nI1014 23:50:48.297844 2150 log.go:181] (0xc0009d9550) Data frame received for 5\nI1014 23:50:48.297880 2150 log.go:181] (0xc000b10000) (5) Data frame handling\nI1014 23:50:48.297892 2150 log.go:181] (0xc000b10000) (5) Data frame sent\nI1014 23:50:48.297906 2150 log.go:181] (0xc0009d9550) Data frame received for 5\nI1014 23:50:48.297917 2150 log.go:181] (0xc000b10000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1014 23:50:48.297930 2150 log.go:181] (0xc0009d9550) Data frame received for 3\nI1014 23:50:48.297940 2150 log.go:181] (0xc0001541e0) (3) Data frame handling\nI1014 23:50:48.297952 2150 log.go:181] (0xc0001541e0) (3) Data frame sent\nI1014 23:50:48.297961 2150 log.go:181] (0xc0009d9550) Data frame received for 3\nI1014 23:50:48.297968 2150 log.go:181] (0xc0001541e0) (3) Data frame handling\nI1014 23:50:48.299504 2150 log.go:181] (0xc0009d9550) Data frame received for 1\nI1014 23:50:48.299521 2150 log.go:181] (0xc0009d0aa0) (1) Data frame handling\nI1014 
23:50:48.299531 2150 log.go:181] (0xc0009d0aa0) (1) Data frame sent\nI1014 23:50:48.299548 2150 log.go:181] (0xc0009d9550) (0xc0009d0aa0) Stream removed, broadcasting: 1\nI1014 23:50:48.299566 2150 log.go:181] (0xc0009d9550) Go away received\nI1014 23:50:48.299884 2150 log.go:181] (0xc0009d9550) (0xc0009d0aa0) Stream removed, broadcasting: 1\nI1014 23:50:48.299913 2150 log.go:181] (0xc0009d9550) (0xc0001541e0) Stream removed, broadcasting: 3\nI1014 23:50:48.299927 2150 log.go:181] (0xc0009d9550) (0xc000b10000) Stream removed, broadcasting: 5\n" Oct 14 23:50:48.306: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 14 23:50:48.306: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 14 23:50:48.310: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Oct 14 23:50:58.315: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:50:58.315: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 14 23:50:58.315: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 14 23:50:58.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:50:58.554: INFO: stderr: "I1014 23:50:58.448635 2167 log.go:181] (0xc0006bd600) (0xc0005ac8c0) Create stream\nI1014 23:50:58.448701 2167 log.go:181] (0xc0006bd600) (0xc0005ac8c0) Stream added, broadcasting: 1\nI1014 23:50:58.454409 2167 log.go:181] (0xc0006bd600) Reply frame received for 1\nI1014 23:50:58.454439 2167 log.go:181] (0xc0006bd600) (0xc0005ac000) Create stream\nI1014 23:50:58.454449 2167 log.go:181] (0xc0006bd600) 
(0xc0005ac000) Stream added, broadcasting: 3\nI1014 23:50:58.455508 2167 log.go:181] (0xc0006bd600) Reply frame received for 3\nI1014 23:50:58.455541 2167 log.go:181] (0xc0006bd600) (0xc0003cdea0) Create stream\nI1014 23:50:58.455555 2167 log.go:181] (0xc0006bd600) (0xc0003cdea0) Stream added, broadcasting: 5\nI1014 23:50:58.456584 2167 log.go:181] (0xc0006bd600) Reply frame received for 5\nI1014 23:50:58.547366 2167 log.go:181] (0xc0006bd600) Data frame received for 5\nI1014 23:50:58.547394 2167 log.go:181] (0xc0003cdea0) (5) Data frame handling\nI1014 23:50:58.547403 2167 log.go:181] (0xc0003cdea0) (5) Data frame sent\nI1014 23:50:58.547409 2167 log.go:181] (0xc0006bd600) Data frame received for 5\nI1014 23:50:58.547415 2167 log.go:181] (0xc0003cdea0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:50:58.547440 2167 log.go:181] (0xc0006bd600) Data frame received for 3\nI1014 23:50:58.547470 2167 log.go:181] (0xc0005ac000) (3) Data frame handling\nI1014 23:50:58.547492 2167 log.go:181] (0xc0005ac000) (3) Data frame sent\nI1014 23:50:58.547507 2167 log.go:181] (0xc0006bd600) Data frame received for 3\nI1014 23:50:58.547521 2167 log.go:181] (0xc0005ac000) (3) Data frame handling\nI1014 23:50:58.549328 2167 log.go:181] (0xc0006bd600) Data frame received for 1\nI1014 23:50:58.549346 2167 log.go:181] (0xc0005ac8c0) (1) Data frame handling\nI1014 23:50:58.549365 2167 log.go:181] (0xc0005ac8c0) (1) Data frame sent\nI1014 23:50:58.549378 2167 log.go:181] (0xc0006bd600) (0xc0005ac8c0) Stream removed, broadcasting: 1\nI1014 23:50:58.549412 2167 log.go:181] (0xc0006bd600) Go away received\nI1014 23:50:58.549776 2167 log.go:181] (0xc0006bd600) (0xc0005ac8c0) Stream removed, broadcasting: 1\nI1014 23:50:58.549795 2167 log.go:181] (0xc0006bd600) (0xc0005ac000) Stream removed, broadcasting: 3\nI1014 23:50:58.549804 2167 log.go:181] (0xc0006bd600) (0xc0003cdea0) Stream removed, broadcasting: 5\n" Oct 14 23:50:58.554: INFO: stdout: 
"'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:50:58.554: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:50:58.554: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:50:58.798: INFO: stderr: "I1014 23:50:58.682857 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4780) Create stream\nI1014 23:50:58.682900 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4780) Stream added, broadcasting: 1\nI1014 23:50:58.687975 2185 log.go:181] (0xc000d0f3f0) Reply frame received for 1\nI1014 23:50:58.688019 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4000) Create stream\nI1014 23:50:58.688032 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4000) Stream added, broadcasting: 3\nI1014 23:50:58.690037 2185 log.go:181] (0xc000d0f3f0) Reply frame received for 3\nI1014 23:50:58.690080 2185 log.go:181] (0xc000d0f3f0) (0xc00081e320) Create stream\nI1014 23:50:58.690093 2185 log.go:181] (0xc000d0f3f0) (0xc00081e320) Stream added, broadcasting: 5\nI1014 23:50:58.691214 2185 log.go:181] (0xc000d0f3f0) Reply frame received for 5\nI1014 23:50:58.757830 2185 log.go:181] (0xc000d0f3f0) Data frame received for 5\nI1014 23:50:58.757855 2185 log.go:181] (0xc00081e320) (5) Data frame handling\nI1014 23:50:58.757868 2185 log.go:181] (0xc00081e320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:50:58.790902 2185 log.go:181] (0xc000d0f3f0) Data frame received for 3\nI1014 23:50:58.790927 2185 log.go:181] (0xc0006d4000) (3) Data frame handling\nI1014 23:50:58.790940 2185 log.go:181] (0xc0006d4000) (3) Data frame sent\nI1014 23:50:58.791115 2185 log.go:181] (0xc000d0f3f0) Data frame received for 3\nI1014 23:50:58.791133 2185 log.go:181] (0xc0006d4000) (3) Data frame handling\nI1014 
23:50:58.791223 2185 log.go:181] (0xc000d0f3f0) Data frame received for 5\nI1014 23:50:58.791264 2185 log.go:181] (0xc00081e320) (5) Data frame handling\nI1014 23:50:58.793607 2185 log.go:181] (0xc000d0f3f0) Data frame received for 1\nI1014 23:50:58.793627 2185 log.go:181] (0xc0006d4780) (1) Data frame handling\nI1014 23:50:58.793643 2185 log.go:181] (0xc0006d4780) (1) Data frame sent\nI1014 23:50:58.793655 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4780) Stream removed, broadcasting: 1\nI1014 23:50:58.793845 2185 log.go:181] (0xc000d0f3f0) Go away received\nI1014 23:50:58.793958 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4780) Stream removed, broadcasting: 1\nI1014 23:50:58.793970 2185 log.go:181] (0xc000d0f3f0) (0xc0006d4000) Stream removed, broadcasting: 3\nI1014 23:50:58.793976 2185 log.go:181] (0xc000d0f3f0) (0xc00081e320) Stream removed, broadcasting: 5\n" Oct 14 23:50:58.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:50:58.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:50:58.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 14 23:50:59.056: INFO: stderr: "I1014 23:50:58.939539 2202 log.go:181] (0xc000e4f080) (0xc00002a8c0) Create stream\nI1014 23:50:58.939593 2202 log.go:181] (0xc000e4f080) (0xc00002a8c0) Stream added, broadcasting: 1\nI1014 23:50:58.945646 2202 log.go:181] (0xc000e4f080) Reply frame received for 1\nI1014 23:50:58.945760 2202 log.go:181] (0xc000e4f080) (0xc00002b180) Create stream\nI1014 23:50:58.945782 2202 log.go:181] (0xc000e4f080) (0xc00002b180) Stream added, broadcasting: 3\nI1014 23:50:58.946807 2202 log.go:181] (0xc000e4f080) Reply frame received for 3\nI1014 23:50:58.946851 2202 log.go:181] (0xc000e4f080) 
(0xc000326fa0) Create stream\nI1014 23:50:58.946860 2202 log.go:181] (0xc000e4f080) (0xc000326fa0) Stream added, broadcasting: 5\nI1014 23:50:58.948200 2202 log.go:181] (0xc000e4f080) Reply frame received for 5\nI1014 23:50:59.021421 2202 log.go:181] (0xc000e4f080) Data frame received for 5\nI1014 23:50:59.021461 2202 log.go:181] (0xc000326fa0) (5) Data frame handling\nI1014 23:50:59.021482 2202 log.go:181] (0xc000326fa0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1014 23:50:59.048617 2202 log.go:181] (0xc000e4f080) Data frame received for 3\nI1014 23:50:59.048675 2202 log.go:181] (0xc00002b180) (3) Data frame handling\nI1014 23:50:59.048700 2202 log.go:181] (0xc000e4f080) Data frame received for 5\nI1014 23:50:59.048717 2202 log.go:181] (0xc000326fa0) (5) Data frame handling\nI1014 23:50:59.048731 2202 log.go:181] (0xc00002b180) (3) Data frame sent\nI1014 23:50:59.048753 2202 log.go:181] (0xc000e4f080) Data frame received for 3\nI1014 23:50:59.048765 2202 log.go:181] (0xc00002b180) (3) Data frame handling\nI1014 23:50:59.050355 2202 log.go:181] (0xc000e4f080) Data frame received for 1\nI1014 23:50:59.050406 2202 log.go:181] (0xc00002a8c0) (1) Data frame handling\nI1014 23:50:59.050447 2202 log.go:181] (0xc00002a8c0) (1) Data frame sent\nI1014 23:50:59.050471 2202 log.go:181] (0xc000e4f080) (0xc00002a8c0) Stream removed, broadcasting: 1\nI1014 23:50:59.050645 2202 log.go:181] (0xc000e4f080) Go away received\nI1014 23:50:59.050920 2202 log.go:181] (0xc000e4f080) (0xc00002a8c0) Stream removed, broadcasting: 1\nI1014 23:50:59.050941 2202 log.go:181] (0xc000e4f080) (0xc00002b180) Stream removed, broadcasting: 3\nI1014 23:50:59.050953 2202 log.go:181] (0xc000e4f080) (0xc000326fa0) Stream removed, broadcasting: 5\n" Oct 14 23:50:59.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 14 23:50:59.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 14 23:50:59.057: INFO: Waiting for statefulset status.replicas updated to 0 Oct 14 23:50:59.060: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Oct 14 23:51:09.067: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:51:09.068: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:51:09.068: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 14 23:51:09.130: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:51:09.130: INFO: ss-0 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC }] Oct 14 23:51:09.130: INFO: ss-1 leguer-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:09.130: INFO: ss-2 leguer-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:09.130: INFO: Oct 14 23:51:09.130: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 23:51:10.135: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:51:10.135: INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC }] Oct 14 23:51:10.135: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:10.135: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:10.135: INFO: Oct 14 23:51:10.135: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 23:51:11.240: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:51:11.241: 
INFO: ss-0 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC }] Oct 14 23:51:11.241: INFO: ss-1 leguer-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:11.241: INFO: ss-2 leguer-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:11.241: INFO: Oct 14 23:51:11.241: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 23:51:12.246: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:51:12.246: INFO: ss-0 leguer-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 
23:50:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:17 +0000 UTC }] Oct 14 23:51:12.246: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:12.246: INFO: ss-2 leguer-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:12.246: INFO: Oct 14 23:51:12.246: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 14 23:51:13.251: INFO: POD NODE PHASE GRACE CONDITIONS Oct 14 23:51:13.251: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }] Oct 14 23:51:13.251: INFO: Oct 14 23:51:13.251: INFO: StatefulSet ss has not reached scale 0, at 1 Oct 14 23:51:14.257: INFO: POD NODE PHASE GRACE CONDITIONS 
Oct 14 23:51:14.257: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }]
Oct 14 23:51:14.257: INFO:
Oct 14 23:51:14.257: INFO: StatefulSet ss has not reached scale 0, at 1
Oct 14 23:51:15.261: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 14 23:51:15.261: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }]
Oct 14 23:51:15.261: INFO:
Oct 14 23:51:15.261: INFO: StatefulSet ss has not reached scale 0, at 1
Oct 14 23:51:16.265: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 14 23:51:16.265: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }]
Oct 14 23:51:16.265: INFO:
Oct 14 23:51:16.265: INFO: StatefulSet ss has not reached scale 0, at 1
Oct 14 23:51:17.315: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 14 23:51:17.315: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }]
Oct 14 23:51:17.315: INFO:
Oct 14 23:51:17.315: INFO: StatefulSet ss has not reached scale 0, at 1
Oct 14 23:51:18.321: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 14 23:51:18.321: INFO: ss-1 leguer-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:59 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-14 23:50:37 +0000 UTC }]
Oct 14 23:51:18.321: INFO:
Oct 14 23:51:18.321: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9174
Oct 14 23:51:19.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 14 23:51:19.487: INFO: rc: 1
Oct 14 23:51:19.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: error: unable to upgrade
connection: container not found ("webserver")
error: exit status 1
Oct 14 23:51:29.487: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 14 23:51:29.587: INFO: rc: 1
Oct 14 23:51:29.587: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:
stderr: Error from server (NotFound): pods "ss-1" not found
error: exit status 1
Oct 14 23:56:25.460: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9174 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 14 23:56:25.571: INFO: rc: 1
Oct 14 23:56:25.571: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1:
Oct 14 23:56:25.571: INFO: Scaling statefulset ss to 0
Oct 14 23:56:25.602: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 14
23:56:25.604: INFO: Deleting all statefulset in ns statefulset-9174
Oct 14 23:56:25.606: INFO: Scaling statefulset ss to 0
Oct 14 23:56:25.614: INFO: Waiting for statefulset status.replicas updated to 0
Oct 14 23:56:25.616: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:56:25.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9174" for this suite.
• [SLOW TEST:368.737 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":207,"skipped":3241,"failed":0}
S
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:56:25.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 14 23:56:25.753: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1" in namespace "downward-api-8846" to be "Succeeded or Failed"
Oct 14 23:56:25.766: INFO: Pod "downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1": Phase="Pending", Reason="", readiness=false. Elapsed: 12.280909ms
Oct 14 23:56:27.770: INFO: Pod "downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016695434s
Oct 14 23:56:29.775: INFO: Pod "downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021404543s
STEP: Saw pod success
Oct 14 23:56:29.775: INFO: Pod "downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1" satisfied condition "Succeeded or Failed"
Oct 14 23:56:29.778: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1 container client-container:
STEP: delete the pod
Oct 14 23:56:29.854: INFO: Waiting for pod downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1 to disappear
Oct 14 23:56:29.867: INFO: Pod downwardapi-volume-b03f7fc6-2877-4ed6-aca1-faa89699f7d1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:56:29.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8846" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":208,"skipped":3242,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:56:29.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 14 23:56:30.005: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:56:31.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4283" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":209,"skipped":3245,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:56:31.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 14 23:56:31.410: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 14 23:57:31.435: INFO: Waiting for terminating namespaces to be deleted...
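The sched-preemption fixture above waits for all nodes to be ready; the case that follows then relies on the scheduler evicting a lower-priority pod so a critical pod can be placed. A minimal sketch of that victim-selection rule, assuming a single fully packed node (the function and pod tuples are illustrative, not the e2e framework's API):

```python
# Sketch of priority-based preemption: to place an incoming pod on a full
# node, evict the lowest-priority running pods whose requests free enough
# capacity. Only pods with strictly lower priority may be preempted.
def pick_victims(node_capacity, running, incoming):
    """running: list of (name, priority, request); incoming: (name, priority, request)."""
    free = node_capacity - sum(req for _, _, req in running)
    victims = []
    # Lowest-priority pods are considered for eviction first.
    for name, prio, req in sorted(running, key=lambda p: p[1]):
        if free >= incoming[2]:
            break
        if prio < incoming[1]:
            victims.append(name)
            free += req
    return victims if free >= incoming[2] else None

# Mirrors the log above: a low- and a medium-priority pod fill the node,
# then a critical (high-priority) pod arrives and the low-priority pod loses.
running = [("pod0-sched-preemption-low-priority", 1, 15),
           ("pod1-sched-preemption-medium-priority", 5, 15)]
print(pick_victims(30, running, ("critical-pod", 1000, 15)))
```

In the real scheduler the decision also weighs pod disruption budgets and affinity, but the ordering shown here (lower priority evicted first, equal-or-higher priority never preempted) is the property this conformance case asserts.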
[It] validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create pods that use 2/3 of node resources.
Oct 14 23:57:31.451: INFO: Created pod: pod0-sched-preemption-low-priority
Oct 14 23:57:31.504: INFO: Created pod: pod1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a critical pod that uses the same resources as that of a lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:57:55.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2039" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:84.373 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates lower priority pod preemption by critical pod [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":210,"skipped":3268,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:57:55.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 14 23:57:56.675: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 14 23:57:58.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 14 23:58:00.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738316676, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 14 23:58:03.811: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Oct 14 23:58:03.831: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 14 23:58:03.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9135" for this suite.
STEP: Destroying namespace "webhook-9135-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.390 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":211,"skipped":3279,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 14 23:58:04.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 14 23:58:04.092: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 14 23:58:04.099: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:04.115: INFO: Number of nodes with available pods: 0 Oct 14 23:58:04.115: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:05.121: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:05.126: INFO: Number of nodes with available pods: 0 Oct 14 23:58:05.126: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:06.328: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:06.471: INFO: Number of nodes with available pods: 0 Oct 14 23:58:06.471: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:07.121: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:07.125: INFO: Number of nodes with available pods: 0 Oct 14 23:58:07.125: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:08.120: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:08.124: INFO: Number of nodes with available pods: 2 Oct 14 23:58:08.124: INFO: Number of running nodes: 2, number of available pods: 2 
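[editor's note] The polling loop above repeats until the number of nodes with an available pod matches the number of schedulable nodes, skipping the tainted control-plane node. A minimal sketch of that readiness count, with illustrative data (the helper and counts are not from the test's actual code):

```python
# Sketch of the check the harness loops on: a DaemonSet has "launched on every
# node" once each schedulable node reports at least one available pod.
def nodes_ready(node_available_pods, tainted_nodes):
    """Return (nodes_with_pods, schedulable_nodes) the way the log counts them."""
    schedulable = {n: c for n, c in node_available_pods.items()
                   if n not in tainted_nodes}  # skip NoSchedule-tainted nodes
    with_pods = sum(1 for c in schedulable.values() if c >= 1)
    return with_pods, len(schedulable)

# Mirrors the log: leguer-control-plane carries the master taint and is skipped.
counts = {"leguer-control-plane": 0, "leguer-worker": 1, "leguer-worker2": 1}
ready, total = nodes_ready(counts, {"leguer-control-plane"})
print(ready, total)  # 2 2 -> "Number of running nodes: 2, number of available pods: 2"
```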
STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 14 23:58:08.152: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:08.152: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:08.172: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:09.182: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:09.182: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:09.185: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:10.179: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:10.179: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:10.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:11.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:11.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 23:58:11.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:12.178: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:12.178: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:12.178: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:12.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:13.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:13.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:13.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:13.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:14.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:14.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:14.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 23:58:14.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:15.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:15.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:15.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:15.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:16.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:16.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:16.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:16.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:17.183: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:17.183: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:17.183: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 23:58:17.186: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:18.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:18.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:18.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:18.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:19.177: INFO: Wrong image for pod: daemon-set-bs44m. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:19.177: INFO: Pod daemon-set-bs44m is not available Oct 14 23:58:19.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:19.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:20.177: INFO: Pod daemon-set-44zzm is not available Oct 14 23:58:20.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:20.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:21.178: INFO: Pod daemon-set-44zzm is not available Oct 14 23:58:21.178: INFO: Wrong image for pod: daemon-set-q5jts. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:21.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:22.177: INFO: Pod daemon-set-44zzm is not available Oct 14 23:58:22.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:22.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:23.176: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:23.180: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:24.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:24.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:25.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:25.177: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:25.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:26.178: INFO: Wrong image for pod: daemon-set-q5jts. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:26.178: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:26.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:27.178: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:27.178: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:27.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:28.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:28.177: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:28.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:29.178: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 14 23:58:29.178: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:29.183: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:30.177: INFO: Wrong image for pod: daemon-set-q5jts. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 14 23:58:30.177: INFO: Pod daemon-set-q5jts is not available Oct 14 23:58:30.182: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:31.176: INFO: Pod daemon-set-5zj6f is not available Oct 14 23:58:31.181: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Oct 14 23:58:31.184: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:31.188: INFO: Number of nodes with available pods: 1 Oct 14 23:58:31.188: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:32.193: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:32.197: INFO: Number of nodes with available pods: 1 Oct 14 23:58:32.197: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:33.193: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:33.196: INFO: Number of nodes with available pods: 1 Oct 14 23:58:33.196: INFO: Node leguer-worker is running more than one daemon pod Oct 14 23:58:34.193: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 14 23:58:34.196: INFO: Number of nodes with available pods: 2 Oct 14 23:58:34.196: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon 
set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7880, will wait for the garbage collector to delete the pods Oct 14 23:58:34.270: INFO: Deleting DaemonSet.extensions daemon-set took: 6.900553ms Oct 14 23:58:34.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.231781ms Oct 14 23:58:40.393: INFO: Number of nodes with available pods: 0 Oct 14 23:58:40.393: INFO: Number of running nodes: 0, number of available pods: 0 Oct 14 23:58:40.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7880/daemonsets","resourceVersion":"2964092"},"items":null} Oct 14 23:58:40.397: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7880/pods","resourceVersion":"2964092"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:58:40.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7880" for this suite. 
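[editor's note] The rollout traced above (each pod marked "not available", deleted, and replaced with the new image, one at a time) is driven by the DaemonSet's RollingUpdate strategy. A minimal sketch of such a spec, assuming default maxUnavailable of 1; the selector labels and container name are illustrative, only the images and names follow the log:

```python
# apps/v1 DaemonSet whose updateStrategy is RollingUpdate: editing
# spec.template's image replaces pods node by node, at most 1 unavailable.
daemon_set = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "daemon-set", "namespace": "daemonsets-7880"},
    "spec": {
        "selector": {"matchLabels": {"daemonset-name": "daemon-set"}},
        "updateStrategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {"maxUnavailable": 1},
        },
        "template": {
            "metadata": {"labels": {"daemonset-name": "daemon-set"}},
            "spec": {"containers": [{
                "name": "app",
                # The rollout in the log swaps httpd:2.4.38-alpine for this:
                "image": "k8s.gcr.io/e2e-test-images/agnhost:2.20",
            }]},
        },
    },
}
print(daemon_set["spec"]["updateStrategy"]["type"])  # RollingUpdate
```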
• [SLOW TEST:36.381 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":212,"skipped":3289,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:58:40.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-e2c07f82-a3bd-4559-904d-02665ab8ceb2 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:58:40.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "secrets-547" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":213,"skipped":3303,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:58:40.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Oct 14 23:58:40.541: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1396' Oct 14 23:58:40.910: INFO: stderr: "" Oct 14 23:58:40.910: INFO: stdout: "pod/pause created\n" Oct 14 23:58:40.910: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 14 23:58:40.911: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1396" to be "running and ready" Oct 14 23:58:40.920: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 9.717667ms Oct 14 23:58:42.925: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014291636s Oct 14 23:58:44.986: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.075377248s Oct 14 23:58:44.986: INFO: Pod "pause" satisfied condition "running and ready" Oct 14 23:58:44.986: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Oct 14 23:58:44.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1396' Oct 14 23:58:45.107: INFO: stderr: "" Oct 14 23:58:45.107: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 14 23:58:45.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1396' Oct 14 23:58:45.221: INFO: stderr: "" Oct 14 23:58:45.221: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 14 23:58:45.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1396' Oct 14 23:58:45.330: INFO: stderr: "" Oct 14 23:58:45.330: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 14 23:58:45.331: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1396' Oct 14 23:58:45.462: INFO: stderr: "" Oct 14 23:58:45.462: INFO: stdout: "NAME READY STATUS RESTARTS 
AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Oct 14 23:58:45.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1396' Oct 14 23:58:45.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 14 23:58:45.607: INFO: stdout: "pod \"pause\" force deleted\n" Oct 14 23:58:45.607: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1396' Oct 14 23:58:45.919: INFO: stderr: "No resources found in kubectl-1396 namespace.\n" Oct 14 23:58:45.919: INFO: stdout: "" Oct 14 23:58:45.919: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1396 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 14 23:58:46.031: INFO: stderr: "" Oct 14 23:58:46.031: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:58:46.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1396" for this suite. 
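[editor's note] The kubectl invocations above rely on label-argument syntax: `key=value` sets a label, a trailing dash (`testing-label-`) removes it. A small sketch of that parsing rule as a hypothetical helper (not kubectl's actual implementation):

```python
# Apply kubectl-style label arguments to a label map:
#   "k=v" sets label k, "k-" deletes label k.
def apply_label_args(labels, *args):
    labels = dict(labels)
    for arg in args:
        if arg.endswith("-") and "=" not in arg:
            labels.pop(arg[:-1], None)          # "testing-label-" -> delete
        else:
            key, _, value = arg.partition("=")  # "testing-label=v" -> set
            labels[key] = value
    return labels

labels = apply_label_args({}, "testing-label=testing-label-value")
print(labels)  # {'testing-label': 'testing-label-value'}
labels = apply_label_args(labels, "testing-label-")
print(labels)  # {}
```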
• [SLOW TEST:5.583 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330 should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":214,"skipped":3340,"failed":0} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:58:46.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 23:58:46.214: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 23:58:46.322: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 23:58:46.333: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Oct 14 23:58:46.479: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:46.479: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:58:46.479: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:46.479: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 23:58:46.479: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Oct 14 23:58:46.490: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:46.490: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:58:46.490: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Oct 14 23:58:46.490: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: verifying the node has the label node leguer-worker STEP: verifying the node has the label node leguer-worker2 Oct 14 23:58:46.648: INFO: Pod kindnet-lc95n requesting resource cpu=100m on Node leguer-worker Oct 14 23:58:46.648: INFO: Pod kindnet-nffr7 requesting resource cpu=100m on Node leguer-worker2 Oct 14 23:58:46.648: INFO: Pod kube-proxy-bmzvg requesting resource cpu=0m on Node leguer-worker Oct 14 23:58:46.648: INFO: Pod kube-proxy-sxhc5 requesting resource cpu=0m on Node leguer-worker2 STEP: Starting Pods to consume most of the cluster CPU. 
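[editor's note] The filler-pod request is derived by subtracting what each node's existing pods already request from the node's allocatable CPU, so that nothing is left over and the next pod fails with "Insufficient cpu". The log does not print allocatable CPU, so the 11230m figure below is an assumption chosen to reproduce the 11130m request that appears next:

```python
# filler = allocatable - sum(existing requests), leaving 0 CPU free.
def filler_request(allocatable_mcpu, pod_requests_mcpu):
    return allocatable_mcpu - sum(pod_requests_mcpu)

# leguer-worker: kindnet requests cpu=100m, kube-proxy cpu=0m (per the log).
filler = filler_request(11230, [100, 0])
print(f"{filler}m")  # 11130m
```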
Oct 14 23:58:46.648: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker Oct 14 23:58:46.656: INFO: Creating a pod which consumes cpu=11130m on Node leguer-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f.163e0148b4811524], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f.163e014909f4a86d], Reason = [Started], Message = [Started container filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f] STEP: Considering event: Type = [Normal], Name = [filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e.163e014871313c13], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e.163e01481fbb9426], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8478/filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e to leguer-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e.163e0148dcbd7db5], Reason = [Started], Message = [Started container filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f.163e014823e1d68e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8478/filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f to leguer-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f.163e0148faa79af8], Reason = [Created], Message = [Created container filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f] STEP: Considering event: Type = [Normal], Name = [filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e.163e0148c5f2b6ed], Reason = [Created], Message = [Created container 
filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e] STEP: Considering event: Type = [Warning], Name = [additional-pod.163e01498c1cf320], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.163e01498eab7071], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node leguer-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node leguer-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:58:53.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8478" for this suite. 
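The arithmetic behind those filler pods can be sketched as follows. The test sizes each filler pod to consume whatever CPU remains allocatable on its node, so a subsequent `additional-pod` cannot fit anywhere; the allocatable figure below (11230m) is inferred from the log's kindnet request of 100m and filler request of 11130m, not read from the cluster directly.

```python
# Illustrative sketch of the resource-limits predicate this test exercises.
# The allocatable value is an assumption inferred from the log, not measured.

def filler_cpu_millis(allocatable_m: int, requested_m: int) -> int:
    """CPU (in millicores) a filler pod must request to saturate a node."""
    return allocatable_m - requested_m

# kindnet requests 100m per node; kube-proxy requests 0m.
filler = filler_cpu_millis(11230, 100)
assert filler == 11130  # matches "Creating a pod which consumes cpu=11130m"

def schedulable(request_m: int, free_cpu_by_node: dict) -> bool:
    """A pod fits only if some node has at least request_m free CPU."""
    return any(free >= request_m for free in free_cpu_by_node.values())

# After both workers are saturated, any nonzero request fails scheduling,
# which is the "2 Insufficient cpu" FailedScheduling event in the log.
assert not schedulable(100, {"leguer-worker": 0, "leguer-worker2": 0})
```

This mirrors why the log's `additional-pod` event reads "0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu."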
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.832 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":215,"skipped":3342,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:58:53.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-c2d8d94e-6db2-48c6-8b28-93dddf8d2860 STEP: Creating a pod to test consume secrets Oct 
14 23:58:53.988: INFO: Waiting up to 5m0s for pod "pod-secrets-67347579-75c9-43b1-959f-25434615edb9" in namespace "secrets-4926" to be "Succeeded or Failed" Oct 14 23:58:54.006: INFO: Pod "pod-secrets-67347579-75c9-43b1-959f-25434615edb9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.110694ms Oct 14 23:58:56.010: INFO: Pod "pod-secrets-67347579-75c9-43b1-959f-25434615edb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02192249s Oct 14 23:58:58.014: INFO: Pod "pod-secrets-67347579-75c9-43b1-959f-25434615edb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025563087s STEP: Saw pod success Oct 14 23:58:58.014: INFO: Pod "pod-secrets-67347579-75c9-43b1-959f-25434615edb9" satisfied condition "Succeeded or Failed" Oct 14 23:58:58.016: INFO: Trying to get logs from node leguer-worker pod pod-secrets-67347579-75c9-43b1-959f-25434615edb9 container secret-volume-test: STEP: delete the pod Oct 14 23:58:58.179: INFO: Waiting for pod pod-secrets-67347579-75c9-43b1-959f-25434615edb9 to disappear Oct 14 23:58:58.182: INFO: Pod pod-secrets-67347579-75c9-43b1-959f-25434615edb9 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 14 23:58:58.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4926" for this suite. 
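A note on the `defaultMode` being validated here: the Secret volume's `defaultMode` is a Unix file mode, but the Kubernetes API serializes it as a decimal integer because JSON has no octal literals, so the conventional `0644` appears as `420` in manifests and API objects. A minimal sketch of that equivalence:

```python
import stat

# defaultMode 0644 (owner rw, group/other r) is stored as decimal 420.
default_mode = 0o644
assert default_mode == 420

# The file the test container reads from the mounted Secret volume should
# therefore carry these permission bits (S_IFREG marks a regular file).
assert stat.filemode(stat.S_IFREG | default_mode) == "-rw-r--r--"
```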
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3353,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 14 23:58:58.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 14 23:58:58.468: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 14 23:58:58.491: INFO: Waiting for terminating namespaces to be deleted... 
Oct 14 23:58:58.494: INFO: Logging pods the apiserver thinks is on node leguer-worker before test Oct 14 23:58:58.499: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.499: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:58:58.499: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.499: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 23:58:58.499: INFO: filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e from sched-pred-8478 started at 2020-10-14 23:58:46 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.499: INFO: Container filler-pod-224d8a22-a17b-4233-b230-24af9d7ac93e ready: true, restart count 0 Oct 14 23:58:58.499: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test Oct 14 23:58:58.504: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.504: INFO: Container kindnet-cni ready: true, restart count 0 Oct 14 23:58:58.504: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.504: INFO: Container kube-proxy ready: true, restart count 0 Oct 14 23:58:58.504: INFO: filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f from sched-pred-8478 started at 2020-10-14 23:58:46 +0000 UTC (1 container statuses recorded) Oct 14 23:58:58.504: INFO: Container filler-pod-2c1030fa-01a5-42e0-934f-13e84a65b40f ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. 
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e2bf0e5c-2a0a-48d9-8a7b-9717e1768547 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e2bf0e5c-2a0a-48d9-8a7b-9717e1768547 off the node leguer-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-e2bf0e5c-2a0a-48d9-8a7b-9717e1768547 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:04:06.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-987" for this suite. 
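The conflict rule this test validates can be sketched as a small predicate: two pods on the same node clash when they declare the same `hostPort` and protocol and their `hostIP`s overlap, where the wildcard `0.0.0.0` overlaps every address. That is why pod5 (hostIP 127.0.0.1) stays unscheduled after pod4 claimed port 54322 on `0.0.0.0`. This is an illustrative simplification, not the scheduler's actual implementation:

```python
# Sketch of the hostPort/hostIP conflict check, under the simplifying
# assumption that each port binding is a dict of ip/port/protocol.

WILDCARD = "0.0.0.0"

def host_ports_conflict(a: dict, b: dict) -> bool:
    """True when two hostPort declarations on one node cannot coexist."""
    if a["port"] != b["port"] or a["protocol"] != b["protocol"]:
        return False
    # Same address, or either side binds the wildcard, means overlap.
    return a["ip"] == b["ip"] or WILDCARD in (a["ip"], b["ip"])

pod4 = {"ip": "0.0.0.0",   "port": 54322, "protocol": "TCP"}  # scheduled
pod5 = {"ip": "127.0.0.1", "port": 54322, "protocol": "TCP"}  # conflicts

assert host_ports_conflict(pod4, pod5)
# A different port on the same node would have been fine.
assert not host_ports_conflict(pod4, {"ip": "0.0.0.0", "port": 54323, "protocol": "TCP"})
```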
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:308.526 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":217,"skipped":3374,"failed":0} S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:04:06.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:04:06.828: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 15 00:04:11.832: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 15 00:04:11.832: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 15 00:04:13.836: INFO: Creating deployment "test-rollover-deployment" Oct 15 00:04:13.861: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 15 00:04:15.876: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 15 00:04:15.882: INFO: Ensure that both replica sets have 1 created replica Oct 15 00:04:15.889: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 15 00:04:15.897: INFO: Updating deployment test-rollover-deployment Oct 15 00:04:15.897: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 15 00:04:17.923: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 15 00:04:17.930: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 15 00:04:17.937: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:17.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317056, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:19.945: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:19.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317058, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:21.946: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:21.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", 
Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317058, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:23.945: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:23.945: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317058, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:25.946: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:25.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317058, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:27.946: INFO: all replica sets need to contain the pod-template-hash label Oct 15 00:04:27.946: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317058, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317053, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:04:29.946: INFO: Oct 15 00:04:29.946: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 15 00:04:29.954: INFO: Deployment "test-rollover-deployment": 
&Deployment{ObjectMeta:{test-rollover-deployment deployment-7858 /apis/apps/v1/namespaces/deployment-7858/deployments/test-rollover-deployment 79c2fdc2-e875-4103-aa03-bc52cebb53d9 2965363 2 2020-10-15 00:04:13 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-15 00:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-15 00:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil 
/dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c5a948 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-15 00:04:13 +0000 UTC,LastTransitionTime:2020-10-15 00:04:13 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-10-15 00:04:28 +0000 UTC,LastTransitionTime:2020-10-15 00:04:13 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 15 00:04:29.957: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-7858 /apis/apps/v1/namespaces/deployment-7858/replicasets/test-rollover-deployment-5797c7764 fa75c4f9-7b0e-4c47-8752-094feba24298 2965352 2 2020-10-15 00:04:15 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 
deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 79c2fdc2-e875-4103-aa03-bc52cebb53d9 0xc003c5ae30 0xc003c5ae31}] [] [{kube-controller-manager Update apps/v1 2020-10-15 00:04:28 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79c2fdc2-e875-4103-aa03-bc52cebb53d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c5aea8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 15 00:04:29.957: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 15 00:04:29.957: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7858 /apis/apps/v1/namespaces/deployment-7858/replicasets/test-rollover-controller 4edbe118-414d-48b8-ae3f-5199bbc197de 2965362 2 2020-10-15 00:04:06 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 79c2fdc2-e875-4103-aa03-bc52cebb53d9 0xc003c5ad1f 0xc003c5ad30}] [] [{e2e.test Update apps/v1 2020-10-15 00:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-15 00:04:28 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79c2fdc2-e875-4103-aa03-bc52cebb53d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003c5adc8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 15 00:04:29.957: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-7858 /apis/apps/v1/namespaces/deployment-7858/replicasets/test-rollover-deployment-78bc8b888c 1fb74634-5599-4df5-99ed-86679aaea600 2965306 2 2020-10-15 00:04:13 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 79c2fdc2-e875-4103-aa03-bc52cebb53d9 0xc003c5af17 0xc003c5af18}] [] [{kube-controller-manager Update apps/v1 2020-10-15 00:04:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79c2fdc2-e875-4103-aa03-bc52cebb53d9\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c5afa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 15 00:04:29.960: INFO: Pod "test-rollover-deployment-5797c7764-cb77q" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-cb77q test-rollover-deployment-5797c7764- deployment-7858 /api/v1/namespaces/deployment-7858/pods/test-rollover-deployment-5797c7764-cb77q da814da8-df86-4eb0-98f3-09890d921846 2965320 0 2020-10-15 00:04:15 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 fa75c4f9-7b0e-4c47-8752-094feba24298 0xc003c5b530 0xc003c5b531}] [] [{kube-controller-manager Update v1 2020-10-15 00:04:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa75c4f9-7b0e-4c47-8752-094feba24298\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-15 00:04:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.93\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qjrrf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qjrrf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qjrrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolic
y:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:04:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:04:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:04:16 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.93,StartTime:2020-10-15 00:04:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-15 00:04:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://e62313910ddafbb393cec070f144e6dd122968f2afe8d5bcbb964b958eebba39,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.93,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:04:29.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7858" for this suite. 
• [SLOW TEST:23.251 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":218,"skipped":3375,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:04:29.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 15 00:04:30.829: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 15 00:04:32.839: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317070, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317070, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317070, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317070, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 15 00:04:35.909: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:04:35.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9415-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:04:37.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-116" for this suite. STEP: Destroying namespace "webhook-116-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.163 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":219,"skipped":3429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:04:37.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 15 00:04:37.265: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697" in namespace "projected-7378" to be "Succeeded or Failed" Oct 15 00:04:37.306: INFO: Pod "downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697": Phase="Pending", Reason="", readiness=false. Elapsed: 40.829525ms Oct 15 00:04:39.311: INFO: Pod "downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045384988s Oct 15 00:04:41.316: INFO: Pod "downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050652895s STEP: Saw pod success Oct 15 00:04:41.316: INFO: Pod "downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697" satisfied condition "Succeeded or Failed" Oct 15 00:04:41.319: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697 container client-container: STEP: delete the pod Oct 15 00:04:41.387: INFO: Waiting for pod downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697 to disappear Oct 15 00:04:41.390: INFO: Pod downwardapi-volume-b9b5c550-dbcd-41ed-8f01-662f5c30b697 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:04:41.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7378" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":220,"skipped":3508,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:04:41.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-1f618f6d-663a-4bc0-a215-f82548659c21 STEP: Creating secret with name s-test-opt-upd-2b080376-6b18-4560-bbe4-32893b4a01c6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1f618f6d-663a-4bc0-a215-f82548659c21 STEP: Updating secret s-test-opt-upd-2b080376-6b18-4560-bbe4-32893b4a01c6 STEP: Creating secret with name s-test-opt-create-9706083f-340e-41e3-8470-807f3f0a58af STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:06:14.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7982" for this suite. 
• [SLOW TEST:92.930 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":221,"skipped":3539,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:06:14.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:07:14.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2960" for this suite. • [SLOW TEST:60.105 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:07:14.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from 
ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7477 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-7477 I1015 00:07:14.621055 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7477, replica count: 2 I1015 00:07:17.671428 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1015 00:07:20.671665 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 15 00:07:20.671: INFO: Creating new exec pod Oct 15 00:07:25.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7477 execpodcl6cm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 15 00:07:29.043: INFO: stderr: "I1015 00:07:28.938700 2926 log.go:181] (0xc0002f3550) (0xc000d3c500) Create stream\nI1015 00:07:28.938769 2926 log.go:181] (0xc0002f3550) (0xc000d3c500) Stream added, broadcasting: 1\nI1015 00:07:28.941070 2926 log.go:181] (0xc0002f3550) Reply frame received for 1\nI1015 00:07:28.941134 2926 log.go:181] (0xc0002f3550) (0xc000b8c000) Create stream\nI1015 00:07:28.941160 2926 log.go:181] (0xc0002f3550) (0xc000b8c000) Stream added, broadcasting: 3\nI1015 00:07:28.944134 2926 log.go:181] (0xc0002f3550) Reply frame received for 3\nI1015 00:07:28.944178 2926 log.go:181] (0xc0002f3550) (0xc000666000) Create stream\nI1015 00:07:28.944193 2926 log.go:181] (0xc0002f3550) (0xc000666000) Stream added, broadcasting: 5\nI1015 00:07:28.945095 2926 
log.go:181] (0xc0002f3550) Reply frame received for 5\nI1015 00:07:29.035367 2926 log.go:181] (0xc0002f3550) Data frame received for 5\nI1015 00:07:29.035399 2926 log.go:181] (0xc000666000) (5) Data frame handling\nI1015 00:07:29.035411 2926 log.go:181] (0xc000666000) (5) Data frame sent\nI1015 00:07:29.035417 2926 log.go:181] (0xc0002f3550) Data frame received for 5\nI1015 00:07:29.035421 2926 log.go:181] (0xc000666000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1015 00:07:29.035437 2926 log.go:181] (0xc000666000) (5) Data frame sent\nI1015 00:07:29.035857 2926 log.go:181] (0xc0002f3550) Data frame received for 5\nI1015 00:07:29.035870 2926 log.go:181] (0xc000666000) (5) Data frame handling\nI1015 00:07:29.035900 2926 log.go:181] (0xc0002f3550) Data frame received for 3\nI1015 00:07:29.035916 2926 log.go:181] (0xc000b8c000) (3) Data frame handling\nI1015 00:07:29.038060 2926 log.go:181] (0xc0002f3550) Data frame received for 1\nI1015 00:07:29.038094 2926 log.go:181] (0xc000d3c500) (1) Data frame handling\nI1015 00:07:29.038109 2926 log.go:181] (0xc000d3c500) (1) Data frame sent\nI1015 00:07:29.038129 2926 log.go:181] (0xc0002f3550) (0xc000d3c500) Stream removed, broadcasting: 1\nI1015 00:07:29.038149 2926 log.go:181] (0xc0002f3550) Go away received\nI1015 00:07:29.038447 2926 log.go:181] (0xc0002f3550) (0xc000d3c500) Stream removed, broadcasting: 1\nI1015 00:07:29.038460 2926 log.go:181] (0xc0002f3550) (0xc000b8c000) Stream removed, broadcasting: 3\nI1015 00:07:29.038467 2926 log.go:181] (0xc0002f3550) (0xc000666000) Stream removed, broadcasting: 5\n" Oct 15 00:07:29.043: INFO: stdout: "" Oct 15 00:07:29.044: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7477 execpodcl6cm -- /bin/sh -x -c nc -zv -t -w 2 10.105.79.238 80' Oct 15 00:07:29.260: INFO: stderr: "I1015 00:07:29.175585 2944 
log.go:181] (0xc000ae2dc0) (0xc000b20640) Create stream\nI1015 00:07:29.175630 2944 log.go:181] (0xc000ae2dc0) (0xc000b20640) Stream added, broadcasting: 1\nI1015 00:07:29.181932 2944 log.go:181] (0xc000ae2dc0) Reply frame received for 1\nI1015 00:07:29.181988 2944 log.go:181] (0xc000ae2dc0) (0xc0009b0000) Create stream\nI1015 00:07:29.182011 2944 log.go:181] (0xc000ae2dc0) (0xc0009b0000) Stream added, broadcasting: 3\nI1015 00:07:29.183337 2944 log.go:181] (0xc000ae2dc0) Reply frame received for 3\nI1015 00:07:29.183370 2944 log.go:181] (0xc000ae2dc0) (0xc000b20000) Create stream\nI1015 00:07:29.183379 2944 log.go:181] (0xc000ae2dc0) (0xc000b20000) Stream added, broadcasting: 5\nI1015 00:07:29.184203 2944 log.go:181] (0xc000ae2dc0) Reply frame received for 5\nI1015 00:07:29.253635 2944 log.go:181] (0xc000ae2dc0) Data frame received for 3\nI1015 00:07:29.253669 2944 log.go:181] (0xc0009b0000) (3) Data frame handling\nI1015 00:07:29.253691 2944 log.go:181] (0xc000ae2dc0) Data frame received for 5\nI1015 00:07:29.253705 2944 log.go:181] (0xc000b20000) (5) Data frame handling\nI1015 00:07:29.253717 2944 log.go:181] (0xc000b20000) (5) Data frame sent\nI1015 00:07:29.253723 2944 log.go:181] (0xc000ae2dc0) Data frame received for 5\n+ nc -zv -t -w 2 10.105.79.238 80\nConnection to 10.105.79.238 80 port [tcp/http] succeeded!\nI1015 00:07:29.253730 2944 log.go:181] (0xc000b20000) (5) Data frame handling\nI1015 00:07:29.255218 2944 log.go:181] (0xc000ae2dc0) Data frame received for 1\nI1015 00:07:29.255236 2944 log.go:181] (0xc000b20640) (1) Data frame handling\nI1015 00:07:29.255257 2944 log.go:181] (0xc000b20640) (1) Data frame sent\nI1015 00:07:29.255467 2944 log.go:181] (0xc000ae2dc0) (0xc000b20640) Stream removed, broadcasting: 1\nI1015 00:07:29.255529 2944 log.go:181] (0xc000ae2dc0) Go away received\nI1015 00:07:29.255925 2944 log.go:181] (0xc000ae2dc0) (0xc000b20640) Stream removed, broadcasting: 1\nI1015 00:07:29.255941 2944 log.go:181] (0xc000ae2dc0) (0xc0009b0000) 
Stream removed, broadcasting: 3\nI1015 00:07:29.255947 2944 log.go:181] (0xc000ae2dc0) (0xc000b20000) Stream removed, broadcasting: 5\n" Oct 15 00:07:29.260: INFO: stdout: "" Oct 15 00:07:29.260: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:07:29.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7477" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:14.898 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":223,"skipped":3624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:07:29.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:07:29.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4796" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":224,"skipped":3659,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:07:29.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis 
discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:07:29.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7989" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":225,"skipped":3659,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:07:29.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 15 00:07:30.170: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 15 00:07:32.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:07:34.184: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317250, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 15 00:07:37.225: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:07:37.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5164" for this suite. STEP: Destroying namespace "webhook-5164-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.375 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":226,"skipped":3664,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:07:37.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 15 00:07:38.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Oct 15 00:07:41.043: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 create -f -'
Oct 15 00:07:44.660: INFO: stderr: ""
Oct 15 00:07:44.660: INFO: stdout: "e2e-test-crd-publish-openapi-3205-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 15 00:07:44.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 delete e2e-test-crd-publish-openapi-3205-crds test-foo'
Oct 15 00:07:44.780: INFO: stderr: ""
Oct 15 00:07:44.780: INFO: stdout: "e2e-test-crd-publish-openapi-3205-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Oct 15 00:07:44.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 apply -f -'
Oct 15 00:07:45.088: INFO: stderr: ""
Oct 15 00:07:45.088: INFO: stdout: "e2e-test-crd-publish-openapi-3205-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Oct 15 00:07:45.088: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 delete e2e-test-crd-publish-openapi-3205-crds test-foo'
Oct 15 00:07:45.210: INFO: stderr: ""
Oct 15 00:07:45.210: INFO: stdout: "e2e-test-crd-publish-openapi-3205-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Oct 15 00:07:45.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 create -f -'
Oct 15 00:07:45.459: INFO: rc: 1
Oct 15 00:07:45.460: INFO: Running '/usr/local/bin/kubectl
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 apply -f -' Oct 15 00:07:45.737: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 15 00:07:45.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 create -f -' Oct 15 00:07:46.011: INFO: rc: 1 Oct 15 00:07:46.011: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7909 apply -f -' Oct 15 00:07:46.300: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 15 00:07:46.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3205-crds' Oct 15 00:07:46.581: INFO: stderr: "" Oct 15 00:07:46.581: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3205-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 15 00:07:46.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3205-crds.metadata' Oct 15 00:07:46.840: INFO: stderr: "" Oct 15 00:07:46.840: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3205-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 15 00:07:46.841: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3205-crds.spec' Oct 15 00:07:47.138: INFO: stderr: "" Oct 15 00:07:47.138: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3205-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 15 00:07:47.138: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3205-crds.spec.bars' Oct 15 00:07:47.415: INFO: stderr: "" Oct 15 00:07:47.415: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3205-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Oct 15 00:07:47.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3205-crds.spec.bars2'
Oct 15 00:07:47.705: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:07:50.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7909" for this suite.
• [SLOW TEST:12.750 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":227,"skipped":3683,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:07:50.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:07:50.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5918" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":228,"skipped":3699,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:07:50.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5530
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5530
STEP: creating replication controller externalsvc in namespace services-5530
I1015 00:07:51.083526 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5530, replica count: 2
I1015 00:07:54.134041 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0
runningButNotReady I1015 00:07:57.134330 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 15 00:07:57.166: INFO: Creating new exec pod Oct 15 00:08:01.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-5530 execpod6lt84 -- /bin/sh -x -c nslookup clusterip-service.services-5530.svc.cluster.local' Oct 15 00:08:01.449: INFO: stderr: "I1015 00:08:01.333723 3196 log.go:181] (0xc0006c53f0) (0xc00061fe00) Create stream\nI1015 00:08:01.333770 3196 log.go:181] (0xc0006c53f0) (0xc00061fe00) Stream added, broadcasting: 1\nI1015 00:08:01.335838 3196 log.go:181] (0xc0006c53f0) Reply frame received for 1\nI1015 00:08:01.335870 3196 log.go:181] (0xc0006c53f0) (0xc0006443c0) Create stream\nI1015 00:08:01.335880 3196 log.go:181] (0xc0006c53f0) (0xc0006443c0) Stream added, broadcasting: 3\nI1015 00:08:01.336680 3196 log.go:181] (0xc0006c53f0) Reply frame received for 3\nI1015 00:08:01.336716 3196 log.go:181] (0xc0006c53f0) (0xc0007272c0) Create stream\nI1015 00:08:01.336734 3196 log.go:181] (0xc0006c53f0) (0xc0007272c0) Stream added, broadcasting: 5\nI1015 00:08:01.337657 3196 log.go:181] (0xc0006c53f0) Reply frame received for 5\nI1015 00:08:01.429778 3196 log.go:181] (0xc0006c53f0) Data frame received for 5\nI1015 00:08:01.429828 3196 log.go:181] (0xc0007272c0) (5) Data frame handling\nI1015 00:08:01.429843 3196 log.go:181] (0xc0007272c0) (5) Data frame sent\n+ nslookup clusterip-service.services-5530.svc.cluster.local\nI1015 00:08:01.441403 3196 log.go:181] (0xc0006c53f0) Data frame received for 3\nI1015 00:08:01.441422 3196 log.go:181] (0xc0006443c0) (3) Data frame handling\nI1015 00:08:01.441440 3196 log.go:181] (0xc0006443c0) (3) Data frame sent\nI1015 00:08:01.442437 3196 log.go:181] (0xc0006c53f0) Data frame received for 3\nI1015 
00:08:01.442456 3196 log.go:181] (0xc0006443c0) (3) Data frame handling\nI1015 00:08:01.442466 3196 log.go:181] (0xc0006443c0) (3) Data frame sent\nI1015 00:08:01.442719 3196 log.go:181] (0xc0006c53f0) Data frame received for 5\nI1015 00:08:01.442738 3196 log.go:181] (0xc0007272c0) (5) Data frame handling\nI1015 00:08:01.442834 3196 log.go:181] (0xc0006c53f0) Data frame received for 3\nI1015 00:08:01.442851 3196 log.go:181] (0xc0006443c0) (3) Data frame handling\nI1015 00:08:01.445027 3196 log.go:181] (0xc0006c53f0) Data frame received for 1\nI1015 00:08:01.445048 3196 log.go:181] (0xc00061fe00) (1) Data frame handling\nI1015 00:08:01.445060 3196 log.go:181] (0xc00061fe00) (1) Data frame sent\nI1015 00:08:01.445077 3196 log.go:181] (0xc0006c53f0) (0xc00061fe00) Stream removed, broadcasting: 1\nI1015 00:08:01.445219 3196 log.go:181] (0xc0006c53f0) Go away received\nI1015 00:08:01.445523 3196 log.go:181] (0xc0006c53f0) (0xc00061fe00) Stream removed, broadcasting: 1\nI1015 00:08:01.445554 3196 log.go:181] (0xc0006c53f0) (0xc0006443c0) Stream removed, broadcasting: 3\nI1015 00:08:01.445569 3196 log.go:181] (0xc0006c53f0) (0xc0007272c0) Stream removed, broadcasting: 5\n" Oct 15 00:08:01.449: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5530.svc.cluster.local\tcanonical name = externalsvc.services-5530.svc.cluster.local.\nName:\texternalsvc.services-5530.svc.cluster.local\nAddress: 10.97.45.255\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5530, will wait for the garbage collector to delete the pods Oct 15 00:08:01.510: INFO: Deleting ReplicationController externalsvc took: 6.361501ms Oct 15 00:08:01.910: INFO: Terminating ReplicationController externalsvc pods took: 400.212895ms Oct 15 00:08:09.545: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:08:09.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5530" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:18.680 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":229,"skipped":3742,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:08:09.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 15 00:08:09.669: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7151' Oct 15 00:08:09.802: INFO: stderr: "" Oct 15 00:08:09.803: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 15 00:08:14.853: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7151 -o json' Oct 15 00:08:14.957: INFO: stderr: "" Oct 15 00:08:14.957: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-15T00:08:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n 
},\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-15T00:08:09Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.28\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-15T00:08:12Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7151\",\n \"resourceVersion\": \"2966530\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7151/pods/e2e-test-httpd-pod\",\n \"uid\": \"e69c5291-251a-493d-aec3-f97639608933\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-m6bgj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"leguer-worker\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n 
\"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-m6bgj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-m6bgj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-15T00:08:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-15T00:08:12Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-15T00:08:12Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-15T00:08:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://0d92c425555c803b95080a5855949bce2d48bd38a4f82bc8d5133543f182237d\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-10-15T00:08:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.18\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.28\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.28\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-15T00:08:09Z\"\n }\n}\n" STEP: replace the image in the pod Oct 15 
00:08:14.957: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7151' Oct 15 00:08:15.612: INFO: stderr: "" Oct 15 00:08:15.612: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Oct 15 00:08:15.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7151' Oct 15 00:08:18.858: INFO: stderr: "" Oct 15 00:08:18.858: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:18.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7151" for this suite. 
• [SLOW TEST:9.308 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":230,"skipped":3747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:18.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 15 00:08:18.969: INFO: Waiting up to 5m0s for pod "pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1" in namespace "emptydir-186" to be "Succeeded or Failed" Oct 15 00:08:18.972: INFO: Pod 
"pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014594ms Oct 15 00:08:20.989: INFO: Pod "pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019780754s Oct 15 00:08:22.993: INFO: Pod "pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1": Phase="Running", Reason="", readiness=true. Elapsed: 4.024102021s Oct 15 00:08:25.024: INFO: Pod "pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.05572075s STEP: Saw pod success Oct 15 00:08:25.025: INFO: Pod "pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1" satisfied condition "Succeeded or Failed" Oct 15 00:08:25.027: INFO: Trying to get logs from node leguer-worker2 pod pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1 container test-container: STEP: delete the pod Oct 15 00:08:25.064: INFO: Waiting for pod pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1 to disappear Oct 15 00:08:25.074: INFO: Pod pod-a3f07bd7-3b8c-4b49-aa60-e80867aa88b1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:25.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-186" for this suite. 
• [SLOW TEST:6.193 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":231,"skipped":3778,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:25.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:08:25.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2126' Oct 15 
00:08:25.456: INFO: stderr: "" Oct 15 00:08:25.456: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 15 00:08:25.456: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2126' Oct 15 00:08:25.769: INFO: stderr: "" Oct 15 00:08:25.769: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 15 00:08:26.774: INFO: Selector matched 1 pods for map[app:agnhost] Oct 15 00:08:26.774: INFO: Found 0 / 1 Oct 15 00:08:27.774: INFO: Selector matched 1 pods for map[app:agnhost] Oct 15 00:08:27.774: INFO: Found 0 / 1 Oct 15 00:08:28.774: INFO: Selector matched 1 pods for map[app:agnhost] Oct 15 00:08:28.774: INFO: Found 0 / 1 Oct 15 00:08:29.774: INFO: Selector matched 1 pods for map[app:agnhost] Oct 15 00:08:29.774: INFO: Found 1 / 1 Oct 15 00:08:29.774: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 15 00:08:29.777: INFO: Selector matched 1 pods for map[app:agnhost] Oct 15 00:08:29.777: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 15 00:08:29.777: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe pod agnhost-primary-rf6c9 --namespace=kubectl-2126' Oct 15 00:08:29.888: INFO: stderr: "" Oct 15 00:08:29.888: INFO: stdout: "Name: agnhost-primary-rf6c9\nNamespace: kubectl-2126\nPriority: 0\nNode: leguer-worker/172.18.0.18\nStart Time: Thu, 15 Oct 2020 00:08:25 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.29\nIPs:\n IP: 10.244.2.29\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://264466884f21e2453795b8c705fa663cbac3f08598942a2e45c6e13edd36c431\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 15 Oct 2020 00:08:28 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-djflm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-djflm:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-djflm\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2126/agnhost-primary-rf6c9 to leguer-worker\n Normal Pulled 3s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Oct 15 00:08:29.888: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-2126' Oct 15 00:08:30.025: INFO: stderr: "" Oct 15 00:08:30.025: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2126\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-rf6c9\n" Oct 15 00:08:30.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-2126' Oct 15 00:08:30.167: INFO: stderr: "" Oct 15 00:08:30.167: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2126\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.99.107.146\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.29:6379\nSession Affinity: None\nEvents: \n" Oct 15 00:08:30.181: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe node leguer-control-plane' Oct 15 00:08:30.342: INFO: stderr: "" Oct 15 00:08:30.342: INFO: stdout: "Name: leguer-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=leguer-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Sun, 04 Oct 2020 09:51:01 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: leguer-control-plane\n AcquireTime: \n RenewTime: Thu, 15 Oct 2020 00:08:26 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 15 Oct 2020 00:07:12 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 15 Oct 2020 00:07:12 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 15 Oct 2020 00:07:12 +0000 Sun, 04 Oct 2020 09:50:57 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 15 Oct 2020 00:07:12 +0000 Sun, 04 Oct 2020 09:51:41 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.19\n Hostname: leguer-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 6326bc1b5ba447818239288d64d2cd76\n System UUID: 653741b7-2395-4557-a394-18309703661a\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-5ftzx 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n 
kube-system coredns-f9fd979d6-fx25r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 10d\n kube-system etcd-leguer-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kindnet-sdmgv 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 10d\n kube-system kube-apiserver-leguer-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-controller-manager-leguer-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-proxy-x65h9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n kube-system kube-scheduler-leguer-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 10d\n local-path-storage local-path-provisioner-78776bfc44-7ptcx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 15 00:08:30.343: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config describe namespace kubectl-2126' Oct 15 00:08:30.450: INFO: stderr: "" Oct 15 00:08:30.450: INFO: stdout: "Name: kubectl-2126\nLabels: e2e-framework=kubectl\n e2e-run=02afe796-93df-403d-b7e6-808052deba20\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:30.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2126" for this suite. 
• [SLOW TEST:5.371 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":232,"skipped":3786,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:30.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 15 00:08:30.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9" in namespace "projected-9723" to be "Succeeded or Failed" Oct 15 00:08:30.571: INFO: Pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.369877ms Oct 15 00:08:32.575: INFO: Pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01234311s Oct 15 00:08:34.579: INFO: Pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9": Phase="Running", Reason="", readiness=true. Elapsed: 4.016231634s Oct 15 00:08:36.587: INFO: Pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02426314s STEP: Saw pod success Oct 15 00:08:36.587: INFO: Pod "downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9" satisfied condition "Succeeded or Failed" Oct 15 00:08:36.590: INFO: Trying to get logs from node leguer-worker2 pod downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9 container client-container: STEP: delete the pod Oct 15 00:08:36.635: INFO: Waiting for pod downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9 to disappear Oct 15 00:08:36.662: INFO: Pod downwardapi-volume-ef6fd8ba-2e7d-41c6-bb9e-d27e2806f5f9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:36.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9723" for this suite. 
• [SLOW TEST:6.217 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3794,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:36.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 15 00:08:37.347: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 15 00:08:39.426: 
INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317317, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317317, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317317, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 15 00:08:42.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:08:42.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:43.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3477" for this suite. STEP: Destroying namespace "webhook-3477-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.048 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":234,"skipped":3817,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:43.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:08:50.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2972" for this suite. STEP: Destroying namespace "nsdeletetest-7652" for this suite. Oct 15 00:08:50.090: INFO: Namespace nsdeletetest-7652 was already deleted STEP: Destroying namespace "nsdeletetest-9685" for this suite. 
• [SLOW TEST:6.379 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":235,"skipped":3829,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:08:50.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 15 00:08:50.162: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI 
documentation Oct 15 00:09:01.029: INFO: >>> kubeConfig: /root/.kube/config Oct 15 00:09:03.992: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:09:14.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7883" for this suite. • [SLOW TEST:24.778 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":236,"skipped":3844,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:09:14.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] 
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 15 00:09:15.010: INFO: Waiting up to 5m0s for pod "pod-73799737-d161-446c-8a22-365a4b69b972" in namespace "emptydir-5294" to be "Succeeded or Failed" Oct 15 00:09:15.013: INFO: Pod "pod-73799737-d161-446c-8a22-365a4b69b972": Phase="Pending", Reason="", readiness=false. Elapsed: 3.347847ms Oct 15 00:09:17.079: INFO: Pod "pod-73799737-d161-446c-8a22-365a4b69b972": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069539208s Oct 15 00:09:19.084: INFO: Pod "pod-73799737-d161-446c-8a22-365a4b69b972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07413401s STEP: Saw pod success Oct 15 00:09:19.084: INFO: Pod "pod-73799737-d161-446c-8a22-365a4b69b972" satisfied condition "Succeeded or Failed" Oct 15 00:09:19.087: INFO: Trying to get logs from node leguer-worker pod pod-73799737-d161-446c-8a22-365a4b69b972 container test-container: STEP: delete the pod Oct 15 00:09:19.252: INFO: Waiting for pod pod-73799737-d161-446c-8a22-365a4b69b972 to disappear Oct 15 00:09:19.283: INFO: Pod pod-73799737-d161-446c-8a22-365a4b69b972 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:09:19.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5294" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":237,"skipped":3850,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:09:19.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 15 00:09:20.397: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 15 00:09:22.548: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317360, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317360, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317360, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738317360, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 15 00:09:25.578: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:09:25.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6262" for this suite.
STEP: Destroying namespace "webhook-6262-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.435 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":238,"skipped":3855,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:09:25.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2079.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2079.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2079.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2079.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2079.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2079.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 15 00:09:31.857: INFO: DNS probes using dns-2079/dns-test-2e60a8f8-6780-4ab6-b1c4-84ec2238659a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:09:31.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2079" for this suite.
• [SLOW TEST:6.282 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":239,"skipped":3863,"failed":0}
S
------------------------------
[sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:09:32.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service in namespace services-3271
Oct 15 00:09:38.405: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 kube-proxy-mode-detector -- /bin/sh
-x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 15 00:09:38.637: INFO: stderr: "I1015 00:09:38.533096 3412 log.go:181] (0xc000add130) (0xc000b065a0) Create stream\nI1015 00:09:38.533155 3412 log.go:181] (0xc000add130) (0xc000b065a0) Stream added, broadcasting: 1\nI1015 00:09:38.535725 3412 log.go:181] (0xc000add130) Reply frame received for 1\nI1015 00:09:38.535768 3412 log.go:181] (0xc000add130) (0xc000b06640) Create stream\nI1015 00:09:38.535780 3412 log.go:181] (0xc000add130) (0xc000b06640) Stream added, broadcasting: 3\nI1015 00:09:38.536709 3412 log.go:181] (0xc000add130) Reply frame received for 3\nI1015 00:09:38.536745 3412 log.go:181] (0xc000add130) (0xc0005b2000) Create stream\nI1015 00:09:38.536758 3412 log.go:181] (0xc000add130) (0xc0005b2000) Stream added, broadcasting: 5\nI1015 00:09:38.537833 3412 log.go:181] (0xc000add130) Reply frame received for 5\nI1015 00:09:38.617833 3412 log.go:181] (0xc000add130) Data frame received for 5\nI1015 00:09:38.617860 3412 log.go:181] (0xc0005b2000) (5) Data frame handling\nI1015 00:09:38.617875 3412 log.go:181] (0xc0005b2000) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1015 00:09:38.632533 3412 log.go:181] (0xc000add130) Data frame received for 3\nI1015 00:09:38.632570 3412 log.go:181] (0xc000b06640) (3) Data frame handling\nI1015 00:09:38.632583 3412 log.go:181] (0xc000b06640) (3) Data frame sent\nI1015 00:09:38.632591 3412 log.go:181] (0xc000add130) Data frame received for 3\nI1015 00:09:38.632597 3412 log.go:181] (0xc000b06640) (3) Data frame handling\nI1015 00:09:38.632621 3412 log.go:181] (0xc000add130) Data frame received for 1\nI1015 00:09:38.632631 3412 log.go:181] (0xc000b065a0) (1) Data frame handling\nI1015 00:09:38.632646 3412 log.go:181] (0xc000b065a0) (1) Data frame sent\nI1015 00:09:38.632663 3412 log.go:181] (0xc000add130) (0xc000b065a0) Stream removed, broadcasting: 1\nI1015 00:09:38.632737 3412 log.go:181] (0xc000add130) Data 
frame received for 5\nI1015 00:09:38.632774 3412 log.go:181] (0xc0005b2000) (5) Data frame handling\nI1015 00:09:38.632797 3412 log.go:181] (0xc000add130) Go away received\nI1015 00:09:38.633093 3412 log.go:181] (0xc000add130) (0xc000b065a0) Stream removed, broadcasting: 1\nI1015 00:09:38.633113 3412 log.go:181] (0xc000add130) (0xc000b06640) Stream removed, broadcasting: 3\nI1015 00:09:38.633122 3412 log.go:181] (0xc000add130) (0xc0005b2000) Stream removed, broadcasting: 5\n"
Oct 15 00:09:38.637: INFO: stdout: "iptables"
Oct 15 00:09:38.637: INFO: proxyMode: iptables
Oct 15 00:09:38.642: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 15 00:09:38.661: INFO: Pod kube-proxy-mode-detector still exists
Oct 15 00:09:40.661: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 15 00:09:40.665: INFO: Pod kube-proxy-mode-detector still exists
Oct 15 00:09:42.661: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Oct 15 00:09:42.665: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-3271
STEP: creating replication controller affinity-clusterip-timeout in namespace services-3271
I1015 00:09:42.742031 7 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-3271, replica count: 3
I1015 00:09:45.792461 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1015 00:09:48.792697 7 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 15 00:09:48.800: INFO: Creating new exec pod
Oct 15 00:09:53.824: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 execpod-affinity8l6ft -- /bin/sh -x -c nc -zv -t -w 2
affinity-clusterip-timeout 80' Oct 15 00:09:54.055: INFO: stderr: "I1015 00:09:53.962464 3430 log.go:181] (0xc000cc0fd0) (0xc000c96aa0) Create stream\nI1015 00:09:53.962518 3430 log.go:181] (0xc000cc0fd0) (0xc000c96aa0) Stream added, broadcasting: 1\nI1015 00:09:53.967490 3430 log.go:181] (0xc000cc0fd0) Reply frame received for 1\nI1015 00:09:53.967536 3430 log.go:181] (0xc000cc0fd0) (0xc000c96000) Create stream\nI1015 00:09:53.967549 3430 log.go:181] (0xc000cc0fd0) (0xc000c96000) Stream added, broadcasting: 3\nI1015 00:09:53.968553 3430 log.go:181] (0xc000cc0fd0) Reply frame received for 3\nI1015 00:09:53.968626 3430 log.go:181] (0xc000cc0fd0) (0xc00059c1e0) Create stream\nI1015 00:09:53.968663 3430 log.go:181] (0xc000cc0fd0) (0xc00059c1e0) Stream added, broadcasting: 5\nI1015 00:09:53.969759 3430 log.go:181] (0xc000cc0fd0) Reply frame received for 5\nI1015 00:09:54.045907 3430 log.go:181] (0xc000cc0fd0) Data frame received for 5\nI1015 00:09:54.045942 3430 log.go:181] (0xc00059c1e0) (5) Data frame handling\nI1015 00:09:54.045963 3430 log.go:181] (0xc00059c1e0) (5) Data frame sent\nI1015 00:09:54.045971 3430 log.go:181] (0xc000cc0fd0) Data frame received for 5\nI1015 00:09:54.045975 3430 log.go:181] (0xc00059c1e0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1015 00:09:54.046032 3430 log.go:181] (0xc00059c1e0) (5) Data frame sent\nI1015 00:09:54.046351 3430 log.go:181] (0xc000cc0fd0) Data frame received for 5\nI1015 00:09:54.046373 3430 log.go:181] (0xc00059c1e0) (5) Data frame handling\nI1015 00:09:54.046554 3430 log.go:181] (0xc000cc0fd0) Data frame received for 3\nI1015 00:09:54.046575 3430 log.go:181] (0xc000c96000) (3) Data frame handling\nI1015 00:09:54.048356 3430 log.go:181] (0xc000cc0fd0) Data frame received for 1\nI1015 00:09:54.048382 3430 log.go:181] (0xc000c96aa0) (1) Data frame handling\nI1015 00:09:54.048398 3430 log.go:181] (0xc000c96aa0) (1) Data 
frame sent\nI1015 00:09:54.048419 3430 log.go:181] (0xc000cc0fd0) (0xc000c96aa0) Stream removed, broadcasting: 1\nI1015 00:09:54.048447 3430 log.go:181] (0xc000cc0fd0) Go away received\nI1015 00:09:54.048709 3430 log.go:181] (0xc000cc0fd0) (0xc000c96aa0) Stream removed, broadcasting: 1\nI1015 00:09:54.048721 3430 log.go:181] (0xc000cc0fd0) (0xc000c96000) Stream removed, broadcasting: 3\nI1015 00:09:54.048727 3430 log.go:181] (0xc000cc0fd0) (0xc00059c1e0) Stream removed, broadcasting: 5\n" Oct 15 00:09:54.055: INFO: stdout: "" Oct 15 00:09:54.056: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 execpod-affinity8l6ft -- /bin/sh -x -c nc -zv -t -w 2 10.98.86.124 80' Oct 15 00:09:54.280: INFO: stderr: "I1015 00:09:54.189072 3448 log.go:181] (0xc000e94fd0) (0xc00033e6e0) Create stream\nI1015 00:09:54.189171 3448 log.go:181] (0xc000e94fd0) (0xc00033e6e0) Stream added, broadcasting: 1\nI1015 00:09:54.193396 3448 log.go:181] (0xc000e94fd0) Reply frame received for 1\nI1015 00:09:54.193445 3448 log.go:181] (0xc000e94fd0) (0xc000c18000) Create stream\nI1015 00:09:54.193456 3448 log.go:181] (0xc000e94fd0) (0xc000c18000) Stream added, broadcasting: 3\nI1015 00:09:54.194193 3448 log.go:181] (0xc000e94fd0) Reply frame received for 3\nI1015 00:09:54.194225 3448 log.go:181] (0xc000e94fd0) (0xc000a84dc0) Create stream\nI1015 00:09:54.194235 3448 log.go:181] (0xc000e94fd0) (0xc000a84dc0) Stream added, broadcasting: 5\nI1015 00:09:54.195145 3448 log.go:181] (0xc000e94fd0) Reply frame received for 5\nI1015 00:09:54.273116 3448 log.go:181] (0xc000e94fd0) Data frame received for 5\nI1015 00:09:54.273175 3448 log.go:181] (0xc000a84dc0) (5) Data frame handling\nI1015 00:09:54.273193 3448 log.go:181] (0xc000a84dc0) (5) Data frame sent\nI1015 00:09:54.273206 3448 log.go:181] (0xc000e94fd0) Data frame received for 5\nI1015 00:09:54.273218 3448 log.go:181] (0xc000a84dc0) (5) Data frame handling\n+ nc -zv 
-t -w 2 10.98.86.124 80\nConnection to 10.98.86.124 80 port [tcp/http] succeeded!\nI1015 00:09:54.273258 3448 log.go:181] (0xc000e94fd0) Data frame received for 3\nI1015 00:09:54.273269 3448 log.go:181] (0xc000c18000) (3) Data frame handling\nI1015 00:09:54.275434 3448 log.go:181] (0xc000e94fd0) Data frame received for 1\nI1015 00:09:54.275455 3448 log.go:181] (0xc00033e6e0) (1) Data frame handling\nI1015 00:09:54.275471 3448 log.go:181] (0xc00033e6e0) (1) Data frame sent\nI1015 00:09:54.275911 3448 log.go:181] (0xc000e94fd0) (0xc00033e6e0) Stream removed, broadcasting: 1\nI1015 00:09:54.275948 3448 log.go:181] (0xc000e94fd0) Go away received\nI1015 00:09:54.276241 3448 log.go:181] (0xc000e94fd0) (0xc00033e6e0) Stream removed, broadcasting: 1\nI1015 00:09:54.276276 3448 log.go:181] (0xc000e94fd0) (0xc000c18000) Stream removed, broadcasting: 3\nI1015 00:09:54.276286 3448 log.go:181] (0xc000e94fd0) (0xc000a84dc0) Stream removed, broadcasting: 5\n" Oct 15 00:09:54.280: INFO: stdout: "" Oct 15 00:09:54.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 execpod-affinity8l6ft -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.86.124:80/ ; done' Oct 15 00:09:54.594: INFO: stderr: "I1015 00:09:54.419303 3467 log.go:181] (0xc000098000) (0xc00051e0a0) Create stream\nI1015 00:09:54.419376 3467 log.go:181] (0xc000098000) (0xc00051e0a0) Stream added, broadcasting: 1\nI1015 00:09:54.421431 3467 log.go:181] (0xc000098000) Reply frame received for 1\nI1015 00:09:54.421470 3467 log.go:181] (0xc000098000) (0xc00051e140) Create stream\nI1015 00:09:54.421483 3467 log.go:181] (0xc000098000) (0xc00051e140) Stream added, broadcasting: 3\nI1015 00:09:54.422443 3467 log.go:181] (0xc000098000) Reply frame received for 3\nI1015 00:09:54.422483 3467 log.go:181] (0xc000098000) (0xc0003d52c0) Create stream\nI1015 00:09:54.422498 3467 log.go:181] (0xc000098000) 
(0xc0003d52c0) Stream added, broadcasting: 5\nI1015 00:09:54.423347 3467 log.go:181] (0xc000098000) Reply frame received for 5\nI1015 00:09:54.495373 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.495406 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.495428 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.495474 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.495521 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.495548 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.499619 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.499634 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.499642 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.500644 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.500673 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.500693 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.503624 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.503655 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.503678 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.506615 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.506632 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.506645 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.507357 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.507391 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.507405 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.507429 3467 log.go:181] (0xc000098000) Data frame received for 
5\nI1015 00:09:54.507446 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.507476 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.511866 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.511890 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.511912 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.512527 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.512548 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.512561 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.512576 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.512594 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.512629 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.516367 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.516387 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.516404 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.516799 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.516828 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.517064 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.517085 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.517099 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.517116 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.523360 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.523410 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.523439 3467 log.go:181] (0xc00051e140) (3) Data frame 
sent\nI1015 00:09:54.523892 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.523917 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.523927 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.523942 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.523961 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.523970 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.527195 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.527223 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.527247 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.527413 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.527428 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.527439 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1015 00:09:54.527739 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.527770 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.527787 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.527818 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.527831 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.527850 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n http://10.98.86.124:80/\nI1015 00:09:54.532375 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.532407 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.532439 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.533593 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.533619 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.533633 3467 log.go:181] (0xc00051e140) 
(3) Data frame sent\nI1015 00:09:54.533651 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.533661 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.533672 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.538138 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.538161 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.538185 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.538369 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.538397 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.538431 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.538469 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.538504 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.538540 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.544163 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.544183 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.544205 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.545217 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.545238 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.545253 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.545273 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.545283 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.545293 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.549729 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.549751 3467 log.go:181] (0xc00051e140) 
(3) Data frame handling\nI1015 00:09:54.549767 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.550878 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.550895 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.550909 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.550922 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.550931 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.550941 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.555464 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.555493 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.555538 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.555778 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.555802 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.555819 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.555841 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.555852 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.555861 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.559727 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.559745 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.559756 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.560784 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.560813 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.560968 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.561002 3467 log.go:181] (0xc000098000) 
Data frame received for 3\nI1015 00:09:54.561016 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.561039 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.564701 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.564720 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.564750 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.565458 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.565484 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.565499 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.565517 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.565525 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.565534 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.569629 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.569648 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.569659 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.570638 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.570668 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.570679 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.570700 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.570713 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.570727 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.578308 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.578334 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.578361 3467 log.go:181] (0xc0003d52c0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.98.86.124:80/\nI1015 00:09:54.578384 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.578392 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.578408 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.578523 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.578546 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.578610 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.583199 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.583215 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.583227 3467 log.go:181] (0xc00051e140) (3) Data frame sent\nI1015 00:09:54.583907 3467 log.go:181] (0xc000098000) Data frame received for 3\nI1015 00:09:54.583926 3467 log.go:181] (0xc00051e140) (3) Data frame handling\nI1015 00:09:54.584277 3467 log.go:181] (0xc000098000) Data frame received for 5\nI1015 00:09:54.584290 3467 log.go:181] (0xc0003d52c0) (5) Data frame handling\nI1015 00:09:54.586263 3467 log.go:181] (0xc000098000) Data frame received for 1\nI1015 00:09:54.586278 3467 log.go:181] (0xc00051e0a0) (1) Data frame handling\nI1015 00:09:54.586291 3467 log.go:181] (0xc00051e0a0) (1) Data frame sent\nI1015 00:09:54.586422 3467 log.go:181] (0xc000098000) (0xc00051e0a0) Stream removed, broadcasting: 1\nI1015 00:09:54.586546 3467 log.go:181] (0xc000098000) Go away received\nI1015 00:09:54.586913 3467 log.go:181] (0xc000098000) (0xc00051e0a0) Stream removed, broadcasting: 1\nI1015 00:09:54.586933 3467 log.go:181] (0xc000098000) (0xc00051e140) Stream removed, broadcasting: 3\nI1015 00:09:54.586948 3467 log.go:181] (0xc000098000) (0xc0003d52c0) Stream removed, broadcasting: 5\n" Oct 15 00:09:54.595: INFO: stdout: 
"\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx\naffinity-clusterip-timeout-zbrwx"
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO: Received response from host: affinity-clusterip-timeout-zbrwx
Oct 15 00:09:54.595: INFO:
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 execpod-affinity8l6ft -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.86.124:80/' Oct 15 00:09:54.833: INFO: stderr: "I1015 00:09:54.733654 3485 log.go:181] (0xc000c50dc0) (0xc0007a4aa0) Create stream\nI1015 00:09:54.733712 3485 log.go:181] (0xc000c50dc0) (0xc0007a4aa0) Stream added, broadcasting: 1\nI1015 00:09:54.738241 3485 log.go:181] (0xc000c50dc0) Reply frame received for 1\nI1015 00:09:54.738279 3485 log.go:181] (0xc000c50dc0) (0xc000736960) Create stream\nI1015 00:09:54.738291 3485 log.go:181] (0xc000c50dc0) (0xc000736960) Stream added, broadcasting: 3\nI1015 00:09:54.738943 3485 log.go:181] (0xc000c50dc0) Reply frame received for 3\nI1015 00:09:54.738972 3485 log.go:181] (0xc000c50dc0) (0xc0007a4b40) Create stream\nI1015 00:09:54.738984 3485 log.go:181] (0xc000c50dc0) (0xc0007a4b40) Stream added, broadcasting: 5\nI1015 00:09:54.739625 3485 log.go:181] (0xc000c50dc0) Reply frame received for 5\nI1015 00:09:54.818415 3485 log.go:181] (0xc000c50dc0) Data frame received for 5\nI1015 00:09:54.818449 3485 log.go:181] (0xc0007a4b40) (5) Data frame handling\nI1015 00:09:54.818473 3485 log.go:181] (0xc0007a4b40) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:09:54.823901 3485 log.go:181] (0xc000c50dc0) Data frame received for 3\nI1015 00:09:54.823934 3485 log.go:181] (0xc000736960) (3) Data frame handling\nI1015 00:09:54.823967 3485 log.go:181] (0xc000736960) (3) Data frame sent\nI1015 00:09:54.825027 3485 log.go:181] (0xc000c50dc0) Data frame received for 5\nI1015 00:09:54.825089 3485 log.go:181] (0xc0007a4b40) (5) Data frame handling\nI1015 00:09:54.825125 3485 log.go:181] (0xc000c50dc0) Data frame received for 3\nI1015 00:09:54.825143 3485 log.go:181] (0xc000736960) (3) Data frame handling\nI1015 00:09:54.826979 3485 log.go:181] (0xc000c50dc0) Data frame received for 1\nI1015 
00:09:54.827004 3485 log.go:181] (0xc0007a4aa0) (1) Data frame handling\nI1015 00:09:54.827028 3485 log.go:181] (0xc0007a4aa0) (1) Data frame sent\nI1015 00:09:54.827601 3485 log.go:181] (0xc000c50dc0) (0xc0007a4aa0) Stream removed, broadcasting: 1\nI1015 00:09:54.827653 3485 log.go:181] (0xc000c50dc0) Go away received\nI1015 00:09:54.828057 3485 log.go:181] (0xc000c50dc0) (0xc0007a4aa0) Stream removed, broadcasting: 1\nI1015 00:09:54.828078 3485 log.go:181] (0xc000c50dc0) (0xc000736960) Stream removed, broadcasting: 3\nI1015 00:09:54.828087 3485 log.go:181] (0xc000c50dc0) (0xc0007a4b40) Stream removed, broadcasting: 5\n" Oct 15 00:09:54.833: INFO: stdout: "affinity-clusterip-timeout-zbrwx" Oct 15 00:10:09.833: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3271 execpod-affinity8l6ft -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.86.124:80/' Oct 15 00:10:10.058: INFO: stderr: "I1015 00:10:09.973906 3503 log.go:181] (0xc000c9d290) (0xc000bcab40) Create stream\nI1015 00:10:09.973974 3503 log.go:181] (0xc000c9d290) (0xc000bcab40) Stream added, broadcasting: 1\nI1015 00:10:09.976247 3503 log.go:181] (0xc000c9d290) Reply frame received for 1\nI1015 00:10:09.976301 3503 log.go:181] (0xc000c9d290) (0xc0005ba320) Create stream\nI1015 00:10:09.976316 3503 log.go:181] (0xc000c9d290) (0xc0005ba320) Stream added, broadcasting: 3\nI1015 00:10:09.977424 3503 log.go:181] (0xc000c9d290) Reply frame received for 3\nI1015 00:10:09.977466 3503 log.go:181] (0xc000c9d290) (0xc000a423c0) Create stream\nI1015 00:10:09.977482 3503 log.go:181] (0xc000c9d290) (0xc000a423c0) Stream added, broadcasting: 5\nI1015 00:10:09.978347 3503 log.go:181] (0xc000c9d290) Reply frame received for 5\nI1015 00:10:10.043701 3503 log.go:181] (0xc000c9d290) Data frame received for 5\nI1015 00:10:10.043731 3503 log.go:181] (0xc000a423c0) (5) Data frame handling\nI1015 00:10:10.043751 3503 log.go:181] 
(0xc000a423c0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.98.86.124:80/\nI1015 00:10:10.048453 3503 log.go:181] (0xc000c9d290) Data frame received for 3\nI1015 00:10:10.048470 3503 log.go:181] (0xc0005ba320) (3) Data frame handling\nI1015 00:10:10.048480 3503 log.go:181] (0xc0005ba320) (3) Data frame sent\nI1015 00:10:10.049310 3503 log.go:181] (0xc000c9d290) Data frame received for 5\nI1015 00:10:10.049335 3503 log.go:181] (0xc000a423c0) (5) Data frame handling\nI1015 00:10:10.049366 3503 log.go:181] (0xc000c9d290) Data frame received for 3\nI1015 00:10:10.049380 3503 log.go:181] (0xc0005ba320) (3) Data frame handling\nI1015 00:10:10.050856 3503 log.go:181] (0xc000c9d290) Data frame received for 1\nI1015 00:10:10.050873 3503 log.go:181] (0xc000bcab40) (1) Data frame handling\nI1015 00:10:10.050885 3503 log.go:181] (0xc000bcab40) (1) Data frame sent\nI1015 00:10:10.050900 3503 log.go:181] (0xc000c9d290) (0xc000bcab40) Stream removed, broadcasting: 1\nI1015 00:10:10.050917 3503 log.go:181] (0xc000c9d290) Go away received\nI1015 00:10:10.051432 3503 log.go:181] (0xc000c9d290) (0xc000bcab40) Stream removed, broadcasting: 1\nI1015 00:10:10.051462 3503 log.go:181] (0xc000c9d290) (0xc0005ba320) Stream removed, broadcasting: 3\nI1015 00:10:10.051475 3503 log.go:181] (0xc000c9d290) (0xc000a423c0) Stream removed, broadcasting: 5\n" Oct 15 00:10:10.058: INFO: stdout: "affinity-clusterip-timeout-48ndx" Oct 15 00:10:10.058: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-3271, will wait for the garbage collector to delete the pods Oct 15 00:10:10.391: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 197.704801ms Oct 15 00:10:10.991: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 600.244336ms [AfterEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:10:20.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3271" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:48.441 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":240,"skipped":3864,"failed":0} SSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:10:20.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: 
Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-5346, will wait for the garbage collector to delete the pods Oct 15 00:10:26.559: INFO: Deleting Job.batch foo took: 6.943551ms Oct 15 00:10:26.959: INFO: Terminating Job.batch foo pods took: 400.278002ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:11:10.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5346" for this suite. • [SLOW TEST:49.818 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":241,"skipped":3869,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:11:10.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:11:10.353: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config version' Oct 15 00:11:10.503: INFO: stderr: "" Oct 15 00:11:10.503: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19+\", GitVersion:\"v1.19.3-rc.0\", GitCommit:\"d60a97015628047ffba1adebed86432370c354bc\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T14:01:27Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:11:10.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-711" for this suite. 
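
The kubectl-version test above asserts that the `kubectl version` stdout contains both a client and a server version struct. A minimal local sketch of that completeness check (a hypothetical shell re-creation, not the framework's actual Go assertion; the version strings are abbreviated from the stdout captured in the log):

```shell
# Check that `kubectl version` printed "all data": both a Client Version
# and a Server Version line must be present in the captured stdout.
out='Client Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.3-rc.0"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0"}'

if echo "$out" | grep -q '^Client Version:' && echo "$out" | grep -q '^Server Version:'; then
  echo "version output complete"
fi
```

Against a live cluster the same idea is simply `kubectl version | grep -c 'Version:'` returning 2; the sketch uses captured text so it runs without a kubeconfig.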
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":242,"skipped":3879,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:11:10.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-026bb898-c9c7-4288-9431-a7fa41bec36d STEP: Creating a pod to test consume configMaps Oct 15 00:11:10.598: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b" in namespace "projected-9538" to be "Succeeded or Failed" Oct 15 00:11:10.602: INFO: Pod "pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327096ms Oct 15 00:11:12.620: INFO: Pod "pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021629643s Oct 15 00:11:14.625: INFO: Pod "pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026732899s STEP: Saw pod success Oct 15 00:11:14.625: INFO: Pod "pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b" satisfied condition "Succeeded or Failed" Oct 15 00:11:14.628: INFO: Trying to get logs from node leguer-worker2 pod pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b container projected-configmap-volume-test: STEP: delete the pod Oct 15 00:11:14.682: INFO: Waiting for pod pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b to disappear Oct 15 00:11:14.719: INFO: Pod pod-projected-configmaps-58cda30a-fd5f-4927-8690-4b8e728d610b no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:11:14.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9538" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3894,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:11:14.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72 Oct 15 00:11:14.889: INFO: Pod name my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72: Found 0 pods out of 1 Oct 15 00:11:19.893: INFO: Pod name my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72: Found 1 pods out of 1 Oct 15 00:11:19.893: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72" are running Oct 15 00:11:19.895: INFO: Pod "my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72-2mz9g" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-15 00:11:14 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-15 00:11:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-15 00:11:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-15 00:11:14 +0000 UTC Reason: Message:}]) Oct 15 00:11:19.896: INFO: Trying to dial the pod Oct 15 00:11:24.909: INFO: Controller my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72: Got expected result from replica 1 [my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72-2mz9g]: "my-hostname-basic-3fb46392-7f77-472e-bcdb-782f43bd2d72-2mz9g", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:11:24.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2691" for this suite. • [SLOW TEST:10.216 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":244,"skipped":3901,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:11:24.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 
15 00:11:29.037: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-4037 PodName:var-expansion-935e32b0-90e6-49db-ba70-ec6c81454007 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:11:29.037: INFO: >>> kubeConfig: /root/.kube/config I1015 00:11:29.067908 7 log.go:181] (0xc0067ac370) (0xc002b4b220) Create stream I1015 00:11:29.067953 7 log.go:181] (0xc0067ac370) (0xc002b4b220) Stream added, broadcasting: 1 I1015 00:11:29.071659 7 log.go:181] (0xc0067ac370) Reply frame received for 1 I1015 00:11:29.071707 7 log.go:181] (0xc0067ac370) (0xc001252500) Create stream I1015 00:11:29.071721 7 log.go:181] (0xc0067ac370) (0xc001252500) Stream added, broadcasting: 3 I1015 00:11:29.072618 7 log.go:181] (0xc0067ac370) Reply frame received for 3 I1015 00:11:29.072664 7 log.go:181] (0xc0067ac370) (0xc001e00320) Create stream I1015 00:11:29.072681 7 log.go:181] (0xc0067ac370) (0xc001e00320) Stream added, broadcasting: 5 I1015 00:11:29.073782 7 log.go:181] (0xc0067ac370) Reply frame received for 5 I1015 00:11:29.157461 7 log.go:181] (0xc0067ac370) Data frame received for 3 I1015 00:11:29.157497 7 log.go:181] (0xc001252500) (3) Data frame handling I1015 00:11:29.157579 7 log.go:181] (0xc0067ac370) Data frame received for 5 I1015 00:11:29.157605 7 log.go:181] (0xc001e00320) (5) Data frame handling I1015 00:11:29.159132 7 log.go:181] (0xc0067ac370) Data frame received for 1 I1015 00:11:29.159167 7 log.go:181] (0xc002b4b220) (1) Data frame handling I1015 00:11:29.159194 7 log.go:181] (0xc002b4b220) (1) Data frame sent I1015 00:11:29.159219 7 log.go:181] (0xc0067ac370) (0xc002b4b220) Stream removed, broadcasting: 1 I1015 00:11:29.159245 7 log.go:181] (0xc0067ac370) Go away received I1015 00:11:29.159337 7 log.go:181] (0xc0067ac370) (0xc002b4b220) Stream removed, broadcasting: 1 I1015 00:11:29.159353 7 log.go:181] (0xc0067ac370) (0xc001252500) Stream removed, broadcasting: 
3 I1015 00:11:29.159360 7 log.go:181] (0xc0067ac370) (0xc001e00320) Stream removed, broadcasting: 5 STEP: test for file in mounted path Oct 15 00:11:29.163: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-4037 PodName:var-expansion-935e32b0-90e6-49db-ba70-ec6c81454007 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:11:29.163: INFO: >>> kubeConfig: /root/.kube/config I1015 00:11:29.187425 7 log.go:181] (0xc0063ba630) (0xc001e006e0) Create stream I1015 00:11:29.187450 7 log.go:181] (0xc0063ba630) (0xc001e006e0) Stream added, broadcasting: 1 I1015 00:11:29.189305 7 log.go:181] (0xc0063ba630) Reply frame received for 1 I1015 00:11:29.189338 7 log.go:181] (0xc0063ba630) (0xc001e00780) Create stream I1015 00:11:29.189350 7 log.go:181] (0xc0063ba630) (0xc001e00780) Stream added, broadcasting: 3 I1015 00:11:29.190091 7 log.go:181] (0xc0063ba630) Reply frame received for 3 I1015 00:11:29.190121 7 log.go:181] (0xc0063ba630) (0xc0007f61e0) Create stream I1015 00:11:29.190132 7 log.go:181] (0xc0063ba630) (0xc0007f61e0) Stream added, broadcasting: 5 I1015 00:11:29.190952 7 log.go:181] (0xc0063ba630) Reply frame received for 5 I1015 00:11:29.250725 7 log.go:181] (0xc0063ba630) Data frame received for 5 I1015 00:11:29.250783 7 log.go:181] (0xc0007f61e0) (5) Data frame handling I1015 00:11:29.250838 7 log.go:181] (0xc0063ba630) Data frame received for 3 I1015 00:11:29.250872 7 log.go:181] (0xc001e00780) (3) Data frame handling I1015 00:11:29.252486 7 log.go:181] (0xc0063ba630) Data frame received for 1 I1015 00:11:29.252519 7 log.go:181] (0xc001e006e0) (1) Data frame handling I1015 00:11:29.252548 7 log.go:181] (0xc001e006e0) (1) Data frame sent I1015 00:11:29.252570 7 log.go:181] (0xc0063ba630) (0xc001e006e0) Stream removed, broadcasting: 1 I1015 00:11:29.252593 7 log.go:181] (0xc0063ba630) Go away received I1015 00:11:29.252756 7 log.go:181] (0xc0063ba630) 
(0xc001e006e0) Stream removed, broadcasting: 1 I1015 00:11:29.252779 7 log.go:181] (0xc0063ba630) (0xc001e00780) Stream removed, broadcasting: 3 I1015 00:11:29.252789 7 log.go:181] (0xc0063ba630) (0xc0007f61e0) Stream removed, broadcasting: 5 STEP: updating the annotation value Oct 15 00:11:29.764: INFO: Successfully updated pod "var-expansion-935e32b0-90e6-49db-ba70-ec6c81454007" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 15 00:11:29.770: INFO: Deleting pod "var-expansion-935e32b0-90e6-49db-ba70-ec6c81454007" in namespace "var-expansion-4037" Oct 15 00:11:29.776: INFO: Wait up to 5m0s for pod "var-expansion-935e32b0-90e6-49db-ba70-ec6c81454007" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:12:09.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4037" for this suite. 
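
The two exec probes in the Variable Expansion test above (`touch /volume_mount/mypath/foo/test.log`, then `test -f /subpath_mount/test.log`) verify that a file written through the full volume path is visible through the subPath mount of the same directory. A local sketch of that probe pair, assuming plain directories in place of the pod's volume mounts (in the real test the two paths are the same directory seen through different mounts):

```shell
# Emulate the subpath write-then-verify probes with a temp directory.
workdir=$(mktemp -d)
mkdir -p "$workdir/mypath/foo"

# mirrors: /bin/sh -c touch /volume_mount/mypath/foo/test.log
touch "$workdir/mypath/foo/test.log"

# mirrors: /bin/sh -c test -f /subpath_mount/test.log
test -f "$workdir/mypath/foo/test.log" && echo "file visible"
```

In the pod, the second probe succeeds only if the kubelet resolved the `subPath` (`mypath/foo`) onto the same backing directory as the first mount, which is exactly what the exec round-trips in the log are confirming.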
• [SLOW TEST:44.856 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":245,"skipped":3909,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:12:09.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Oct 15 00:12:17.130: INFO: 0 pods remaining Oct 15 00:12:17.130: INFO: 0 pods has nil DeletionTimestamp Oct 15 00:12:17.130: INFO: Oct 15 00:12:17.641: INFO: 0 pods remaining Oct 15 00:12:17.641: INFO: 0 pods has nil DeletionTimestamp Oct 15 
00:12:17.641: INFO: Oct 15 00:12:18.538: INFO: 0 pods remaining Oct 15 00:12:18.538: INFO: 0 pods has nil DeletionTimestamp Oct 15 00:12:18.538: INFO: STEP: Gathering metrics W1015 00:12:20.195548 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 15 00:13:22.220: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:13:22.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6423" for this suite. • [SLOW TEST:72.427 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":246,"skipped":3935,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 
00:13:22.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name projected-secret-test-28d522ee-530c-4d55-b352-a131050f0222 STEP: Creating a pod to test consume secrets Oct 15 00:13:22.375: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86" in namespace "projected-5000" to be "Succeeded or Failed" Oct 15 00:13:22.385: INFO: Pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86": Phase="Pending", Reason="", readiness=false. Elapsed: 10.271231ms Oct 15 00:13:24.394: INFO: Pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019284694s Oct 15 00:13:26.399: INFO: Pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023886935s Oct 15 00:13:28.403: INFO: Pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.027890733s STEP: Saw pod success Oct 15 00:13:28.403: INFO: Pod "pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86" satisfied condition "Succeeded or Failed" Oct 15 00:13:28.406: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86 container secret-volume-test: STEP: delete the pod Oct 15 00:13:28.472: INFO: Waiting for pod pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86 to disappear Oct 15 00:13:28.480: INFO: Pod pod-projected-secrets-636207f0-9c88-4bd6-a493-df855410df86 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:13:28.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5000" for this suite. • [SLOW TEST:6.258 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":247,"skipped":3938,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:13:28.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 15 00:13:28.603: INFO: Waiting up to 5m0s for pod "downward-api-cab55213-3356-458c-aab5-44e016993d4f" in namespace "downward-api-8395" to be "Succeeded or Failed" Oct 15 00:13:28.624: INFO: Pod "downward-api-cab55213-3356-458c-aab5-44e016993d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.993779ms Oct 15 00:13:30.694: INFO: Pod "downward-api-cab55213-3356-458c-aab5-44e016993d4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091012535s Oct 15 00:13:32.698: INFO: Pod "downward-api-cab55213-3356-458c-aab5-44e016993d4f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095322302s STEP: Saw pod success Oct 15 00:13:32.698: INFO: Pod "downward-api-cab55213-3356-458c-aab5-44e016993d4f" satisfied condition "Succeeded or Failed" Oct 15 00:13:32.702: INFO: Trying to get logs from node leguer-worker2 pod downward-api-cab55213-3356-458c-aab5-44e016993d4f container dapi-container: STEP: delete the pod Oct 15 00:13:32.733: INFO: Waiting for pod downward-api-cab55213-3356-458c-aab5-44e016993d4f to disappear Oct 15 00:13:32.753: INFO: Pod downward-api-cab55213-3356-458c-aab5-44e016993d4f no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:13:32.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8395" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":248,"skipped":3959,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:13:32.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 
1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9000.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9000.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 15 00:13:38.947: INFO: DNS probes using dns-9000/dns-test-beeee7a1-45e8-494e-aaee-4b506f99d67c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:13:38.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9000" for this suite. • [SLOW TEST:6.450 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":249,"skipped":3963,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Oct 15 00:13:39.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-d6c1b6b4-6479-4930-ba7c-2e4eea168b9b in namespace container-probe-471 Oct 15 00:13:45.581: INFO: Started pod liveness-d6c1b6b4-6479-4930-ba7c-2e4eea168b9b in namespace container-probe-471 STEP: checking the pod's current state and verifying that restartCount is present Oct 15 00:13:45.584: INFO: Initial restart count of pod liveness-d6c1b6b4-6479-4930-ba7c-2e4eea168b9b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:17:46.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-471" for this suite. 
• [SLOW TEST:247.083 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":250,"skipped":3971,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:17:46.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 15 00:17:51.277: INFO: Successfully updated pod 
"annotationupdate3dae98cb-737c-43ff-8b45-232e5f474924" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:17:53.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6263" for this suite. • [SLOW TEST:7.009 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":251,"skipped":4035,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:17:53.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Oct 15 00:17:53.390: INFO: Waiting up to 5m0s for pod "var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024" in namespace "var-expansion-3951" to be "Succeeded or Failed" Oct 15 00:17:53.396: INFO: Pod "var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024": Phase="Pending", Reason="", readiness=false. Elapsed: 6.254726ms Oct 15 00:17:55.401: INFO: Pod "var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010799555s Oct 15 00:17:57.406: INFO: Pod "var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015898921s STEP: Saw pod success Oct 15 00:17:57.406: INFO: Pod "var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024" satisfied condition "Succeeded or Failed" Oct 15 00:17:57.410: INFO: Trying to get logs from node leguer-worker2 pod var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024 container dapi-container: STEP: delete the pod Oct 15 00:17:57.450: INFO: Waiting for pod var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024 to disappear Oct 15 00:17:57.462: INFO: Pod var-expansion-aeb08b40-a7a7-4a38-96c8-53b02cc4b024 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:17:57.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3951" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":252,"skipped":4053,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:17:57.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-abfdca66-aff2-4202-a4c3-cc5299b9a369 in namespace container-probe-5219 Oct 15 00:18:01.587: INFO: Started pod busybox-abfdca66-aff2-4202-a4c3-cc5299b9a369 in namespace container-probe-5219 STEP: checking the pod's current state and verifying that restartCount is present Oct 15 00:18:01.590: INFO: Initial restart count of pod busybox-abfdca66-aff2-4202-a4c3-cc5299b9a369 is 0 Oct 15 00:18:53.736: INFO: Restart count of pod container-probe-5219/busybox-abfdca66-aff2-4202-a4c3-cc5299b9a369 is now 1 (52.146586879s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:18:53.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5219" for this suite. • [SLOW TEST:56.302 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":253,"skipped":4057,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:18:53.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:18:53.859: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 15 00:18:58.865: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 15 00:18:58.866: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 15 00:19:02.995: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3469 /apis/apps/v1/namespaces/deployment-3469/deployments/test-cleanup-deployment cbbc38c6-3f0b-4689-a529-b9c04b863c85 2969518 1 2020-10-15 00:18:58 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2020-10-15 00:18:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-15 00:19:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fde838 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-15 00:18:59 
+0000 UTC,LastTransitionTime:2020-10-15 00:18:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5d446bdd47" has successfully progressed.,LastUpdateTime:2020-10-15 00:19:01 +0000 UTC,LastTransitionTime:2020-10-15 00:18:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 15 00:19:02.999: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-3469 /apis/apps/v1/namespaces/deployment-3469/replicasets/test-cleanup-deployment-5d446bdd47 37cd8b35-ef6b-4a11-a2a1-509729938702 2969507 1 2020-10-15 00:18:58 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment cbbc38c6-3f0b-4689-a529-b9c04b863c85 0xc003fdec77 0xc003fdec78}] [] [{kube-controller-manager Update apps/v1 2020-10-15 00:19:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cbbc38c6-3f0b-4689-a529-b9c04b863c85\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fded08 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 15 00:19:03.003: INFO: Pod "test-cleanup-deployment-5d446bdd47-8nhbp" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-8nhbp test-cleanup-deployment-5d446bdd47- deployment-3469 /api/v1/namespaces/deployment-3469/pods/test-cleanup-deployment-5d446bdd47-8nhbp 4f9b4350-87c0-4913-9eea-2aac8ca4369a 2969506 0 2020-10-15 00:18:58 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 37cd8b35-ef6b-4a11-a2a1-509729938702 0xc004f68297 0xc004f68298}] [] [{kube-controller-manager Update v1 2020-10-15 00:18:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"37cd8b35-ef6b-4a11-a2a1-509729938702\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-15 00:19:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.121\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f7fcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f7fcb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f7fcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:leguer-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:18:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:19:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:19:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-15 00:18:58 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.17,PodIP:10.244.1.121,StartTime:2020-10-15 00:18:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-15 00:19:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://d56fa479fb433d998152c131256cff300ad8fd10b967c7534b84759eabef6e28,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.121,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:19:03.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3469" for this suite. 
• [SLOW TEST:9.235 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":254,"skipped":4061,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:19:03.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7317 STEP: creating service affinity-clusterip-transition in namespace services-7317 STEP: creating replication controller affinity-clusterip-transition in namespace services-7317 
I1015 00:19:03.086709 7 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-7317, replica count: 3
I1015 00:19:06.137146 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1015 00:19:09.137304 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1015 00:19:12.137562 7 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 15 00:19:12.144: INFO: Creating new exec pod
Oct 15 00:19:17.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7317 execpod-affinityb7j86 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Oct 15 00:19:20.425: INFO: stderr: "[log.go:181 stream-frame debug elided] + nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!"
Oct 15 00:19:20.425: INFO: stdout: ""
Oct 15 00:19:20.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7317 execpod-affinityb7j86 -- /bin/sh -x -c nc -zv -t -w 2 10.96.231.238 80'
Oct 15 00:19:20.647: INFO: stderr: "[log.go:181 stream-frame debug elided] + nc -zv -t -w 2 10.96.231.238 80\nConnection to 10.96.231.238 80 port [tcp/http] succeeded!"
Oct 15 00:19:20.647: INFO: stdout: ""
Oct 15 00:19:20.653: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7317 execpod-affinityb7j86 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.231.238:80/ ; done'
Oct 15 00:19:20.995: INFO: stderr: "[log.go:181 stream-frame debug elided; shell trace: + seq 0 15, then + echo / + curl -q -s --connect-timeout 2 http://10.96.231.238:80/ repeated 16 times]"
Oct 15 00:19:20.995: INFO: stdout: "\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-z45vk\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-z45vk\naffinity-clusterip-transition-z45vk\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-z45vk\naffinity-clusterip-transition-z45vk\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-lzdpx\naffinity-clusterip-transition-lzdpx"
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-z45vk
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:20.995: INFO: Received response from host: affinity-clusterip-transition-z45vk
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-z45vk
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-z45vk
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-z45vk
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:20.996: INFO: Received response from host: affinity-clusterip-transition-lzdpx
Oct 15 00:19:21.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-7317 execpod-affinityb7j86 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.96.231.238:80/ ; done'
Oct 15 00:19:21.321: INFO: stderr: "[log.go:181 stream-frame debug elided; shell trace: + seq 0 15, then + echo / + curl -q -s --connect-timeout 2 http://10.96.231.238:80/ repeated 16 times]"
Oct 15 00:19:21.322: INFO: stdout: "\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln\naffinity-clusterip-transition-7pwln"
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln
Oct 15 00:19:21.322: INFO: Received response from host: 
affinity-clusterip-transition-7pwln Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln Oct 15 00:19:21.322: INFO: Received response from host: affinity-clusterip-transition-7pwln Oct 15 00:19:21.322: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7317, will wait for the garbage collector to delete the pods Oct 15 00:19:21.420: INFO: Deleting ReplicationController affinity-clusterip-transition took: 31.396362ms Oct 15 00:19:21.920: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.238282ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:19:30.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7317" for this suite. 
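The affinity test above curls the service's ClusterIP repeatedly from an exec pod and checks that, while session affinity is set to ClientIP, every response names the same backend pod (here, 16 consecutive hits on `affinity-clusterip-transition-7pwln`). A minimal Python sketch of that check, not the e2e framework's actual Go implementation:

```python
# Sketch of the assertion behind the session-affinity test: with affinity
# enabled, all responses to the ClusterIP must come from one backend pod.
def affinity_held(hostnames):
    """Return True if every response came from a single backend pod."""
    return len(set(hostnames)) == 1

# The log above shows 16 consecutive responses from the same pod:
responses = ["affinity-clusterip-transition-7pwln"] * 16
```

Once affinity is switched off, the same probe loop would be expected to see multiple distinct hostnames, which is what the "transition" variant of the test verifies.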
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:27.570 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":255,"skipped":4094,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:19:30.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-04da9d71-8ed1-4ca9-8396-0aef92df40d0 STEP: Creating a pod to test consume secrets Oct 15 00:19:30.669: INFO: Waiting up to 5m0s for pod 
"pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1" in namespace "secrets-5321" to be "Succeeded or Failed" Oct 15 00:19:30.713: INFO: Pod "pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1": Phase="Pending", Reason="", readiness=false. Elapsed: 44.277325ms Oct 15 00:19:32.719: INFO: Pod "pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049700319s Oct 15 00:19:34.730: INFO: Pod "pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060599143s STEP: Saw pod success Oct 15 00:19:34.730: INFO: Pod "pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1" satisfied condition "Succeeded or Failed" Oct 15 00:19:34.733: INFO: Trying to get logs from node leguer-worker pod pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1 container secret-volume-test: STEP: delete the pod Oct 15 00:19:34.789: INFO: Waiting for pod pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1 to disappear Oct 15 00:19:34.793: INFO: Pod pod-secrets-2a7872bd-7007-43a9-b515-f18d03df64b1 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:19:34.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5321" for this suite. 
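The Secrets volume test above mounts a Secret into a pod and reads the resulting files. Conceptually, the kubelet projects each base64-encoded entry in the Secret's `data` map into one file per key inside the mount. A hedged sketch of that projection (the key/value names below are illustrative, not taken from the log):

```python
import base64

# Illustrative sketch of Secret volume projection: each base64 value in the
# Secret's "data" map becomes a decoded file named after its key.
def project_secret(data):
    """Return {file_name: decoded_contents} for a Secret's data map."""
    return {name: base64.b64decode(value).decode() for name, value in data.items()}

# Hypothetical entry; real Secrets store values base64-encoded in etcd.
secret_data = {"data-1": base64.b64encode(b"value-1").decode()}
files = project_secret(secret_data)
```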
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4096,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:19:34.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:19:34.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4214" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":257,"skipped":4142,"failed":0} ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:19:34.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Oct 15 00:19:35.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969763 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:19:35.101: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 
/api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969763 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Oct 15 00:19:45.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969837 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:19:45.108: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969837 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:45 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Oct 15 00:19:55.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969865 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[{e2e.test Update v1 2020-10-15 00:19:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:19:55.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969865 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Oct 15 00:20:05.127: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969895 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:20:05.127: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-a e74c7564-770a-4c58-873a-2a4c670d39d2 2969895 0 2020-10-15 00:19:35 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-15 00:19:55 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
creating a configmap with label B and ensuring the correct watchers observe the notification Oct 15 00:20:15.136: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-b e426952d-8c11-46d1-84b6-369505c62ce5 2969925 0 2020-10-15 00:20:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-15 00:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:20:15.136: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-b e426952d-8c11-46d1-84b6-369505c62ce5 2969925 0 2020-10-15 00:20:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-15 00:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Oct 15 00:20:25.143: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-b e426952d-8c11-46d1-84b6-369505c62ce5 2969955 0 2020-10-15 00:20:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-15 00:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 15 00:20:25.143: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3662 /api/v1/namespaces/watch-3662/configmaps/e2e-watch-test-configmap-b e426952d-8c11-46d1-84b6-369505c62ce5 2969955 0 2020-10-15 00:20:15 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[{e2e.test Update v1 2020-10-15 00:20:15 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:20:35.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3662" for this suite. • [SLOW TEST:60.184 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":258,"skipped":4142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:20:35.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default 
limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 15 00:20:35.272: INFO: Waiting up to 5m0s for pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481" in namespace "downward-api-6016" to be "Succeeded or Failed" Oct 15 00:20:35.275: INFO: Pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481": Phase="Pending", Reason="", readiness=false. Elapsed: 3.493762ms Oct 15 00:20:37.323: INFO: Pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051103825s Oct 15 00:20:39.328: INFO: Pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481": Phase="Running", Reason="", readiness=true. Elapsed: 4.056387086s Oct 15 00:20:41.334: INFO: Pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.061607059s STEP: Saw pod success Oct 15 00:20:41.334: INFO: Pod "downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481" satisfied condition "Succeeded or Failed" Oct 15 00:20:41.337: INFO: Trying to get logs from node leguer-worker pod downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481 container dapi-container: STEP: delete the pod Oct 15 00:20:41.361: INFO: Waiting for pod downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481 to disappear Oct 15 00:20:41.365: INFO: Pod downward-api-6177d75a-1322-4194-9c7f-8b6eb82f7481 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:20:41.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6016" for this suite. 
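The Downward API test above verifies the defaulting rule in its name: when a container declares no CPU/memory limits, an env var backed by a `resourceFieldRef` falls back to the node's allocatable capacity. A hedged sketch of that resolution (values are illustrative, not read from the log):

```python
# Sketch of resourceFieldRef defaulting: use the container's declared limit
# if present, otherwise fall back to the node's allocatable value.
def resolve_resource_field(container_limits, node_allocatable, resource):
    """Return the value the downward API would expose for `resource`."""
    return container_limits.get(resource) or node_allocatable[resource]
```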
• [SLOW TEST:6.201 seconds] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":259,"skipped":4191,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:20:41.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-fe824fbe-8cff-4717-b775-285571399958 STEP: Creating a pod to test consume configMaps Oct 15 00:20:41.481: INFO: Waiting up to 5m0s for pod "pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487" in namespace "configmap-667" to be "Succeeded or Failed" Oct 15 00:20:41.485: INFO: Pod 
"pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487": Phase="Pending", Reason="", readiness=false. Elapsed: 3.949159ms Oct 15 00:20:43.489: INFO: Pod "pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008260767s Oct 15 00:20:45.494: INFO: Pod "pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012544494s STEP: Saw pod success Oct 15 00:20:45.494: INFO: Pod "pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487" satisfied condition "Succeeded or Failed" Oct 15 00:20:45.497: INFO: Trying to get logs from node leguer-worker pod pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487 container configmap-volume-test: STEP: delete the pod Oct 15 00:20:45.527: INFO: Waiting for pod pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487 to disappear Oct 15 00:20:45.555: INFO: Pod pod-configmaps-49a92d5b-b40c-4db7-9d44-13e189610487 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:20:45.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-667" for this suite. 
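The `defaultMode` test above checks file permissions on ConfigMap volume files. The resolution order it relies on can be sketched simply: an item's explicit `mode` wins, otherwise the volume's `defaultMode`, otherwise the API default of 0644. A minimal illustration of that precedence:

```python
# Sketch of ConfigMap volume file-mode precedence: per-item mode, then the
# volume's defaultMode, then the API default 0644.
def file_mode(item_mode=None, default_mode=None):
    if item_mode is not None:
        return item_mode
    if default_mode is not None:
        return default_mode
    return 0o644
```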
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":260,"skipped":4193,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:20:45.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 in namespace container-probe-4905 Oct 15 00:20:49.982: INFO: Started pod liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 in namespace container-probe-4905 STEP: checking the pod's current state and verifying that restartCount is present Oct 15 00:20:49.984: INFO: Initial restart count of pod liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is 0 Oct 15 00:21:08.061: INFO: Restart count of pod container-probe-4905/liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is now 1 (18.077238938s elapsed) Oct 15 00:21:28.109: INFO: Restart count of 
pod container-probe-4905/liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is now 2 (38.124935135s elapsed) Oct 15 00:21:48.168: INFO: Restart count of pod container-probe-4905/liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is now 3 (58.183395804s elapsed) Oct 15 00:22:08.226: INFO: Restart count of pod container-probe-4905/liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is now 4 (1m18.241985767s elapsed) Oct 15 00:23:10.414: INFO: Restart count of pod container-probe-4905/liveness-569d1fb8-7fde-4e42-a7c8-8b5cc796b689 is now 5 (2m20.429434413s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:23:10.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4905" for this suite. • [SLOW TEST:144.592 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":261,"skipped":4242,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:23:10.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 15 00:23:16.897: INFO: DNS probes using dns-test-1dedbec4-0e5a-4857-975a-36b5aa73f999 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 15 
00:23:25.487: INFO: File wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:25.491: INFO: File jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:25.491: INFO: Lookups using dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 failed for: [wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local] Oct 15 00:23:30.496: INFO: File wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:30.498: INFO: File jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:30.498: INFO: Lookups using dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 failed for: [wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local] Oct 15 00:23:35.496: INFO: File wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:35.500: INFO: File jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 15 00:23:35.500: INFO: Lookups using dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 failed for: [wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local] Oct 15 00:23:40.495: INFO: File wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:40.499: INFO: File jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:40.499: INFO: Lookups using dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 failed for: [wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local] Oct 15 00:23:45.496: INFO: File wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' Oct 15 00:23:45.499: INFO: File jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local from pod dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Oct 15 00:23:45.499: INFO: Lookups using dns-3540/dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 failed for: [wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local] Oct 15 00:23:50.499: INFO: DNS probes using dns-test-01976d1f-5e83-4214-b13e-5b36388115c0 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3540.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3540.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 15 00:23:59.177: INFO: DNS probes using dns-test-c6c5b6ad-7b1c-4df4-8944-ae8a8182673f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:23:59.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3540" for this suite. 
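The ExternalName test above drives a retry loop: each probe pod runs `dig` against the service name every second for up to 30 iterations, and the framework re-reads the result file until it sees the updated CNAME target. A minimal sketch of that probe pattern is below; the `lookup` function is a local stub standing in for the real `dig +short … CNAME` call, so the loop runs without a cluster (the service name and expected target are taken from the log above).

```shell
# Stub for: dig +short dns-test-service-3.dns-3540.svc.cluster.local CNAME
# In the real probe pod this queries the cluster DNS; here it returns the
# post-update target immediately so the retry pattern can be demonstrated.
lookup() {
  echo "bar.example.com."
}

expected="bar.example.com."
result=""
# Same shape as the injected probe loop: up to 30 attempts, 1s apart,
# stopping as soon as the answer matches the updated externalName.
for i in $(seq 1 30); do
  result=$(lookup dns-test-service-3.dns-3540.svc.cluster.local)
  [ "$result" = "$expected" ] && break
  sleep 1
done
echo "$result"
```

The intermediate log lines showing `'foo.example.com. '` instead of `'bar.example.com.'` are this loop observing the old CNAME while the DNS cache catches up; the test only fails if the answer never converges within its timeout.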
• [SLOW TEST:48.843 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":262,"skipped":4260,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:23:59.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-863c0761-6fd3-4f8b-99ac-cd92c3be6013 STEP: Creating a pod to test consume secrets Oct 15 00:23:59.764: INFO: Waiting up to 5m0s for pod "pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f" in namespace "secrets-6308" to be "Succeeded or Failed" Oct 15 00:23:59.838: INFO: Pod "pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 73.754732ms Oct 15 00:24:01.842: INFO: Pod "pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077792146s Oct 15 00:24:03.846: INFO: Pod "pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081465842s STEP: Saw pod success Oct 15 00:24:03.846: INFO: Pod "pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f" satisfied condition "Succeeded or Failed" Oct 15 00:24:03.847: INFO: Trying to get logs from node leguer-worker2 pod pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f container secret-volume-test: STEP: delete the pod Oct 15 00:24:03.887: INFO: Waiting for pod pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f to disappear Oct 15 00:24:03.936: INFO: Pod pod-secrets-c4f81eba-1b52-4a4c-b1f6-c4b5d483b51f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:03.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6308" for this suite. 
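The Secrets-with-mappings test creates a Secret, mounts it into a pod with an `items` mapping (key renamed to a new file path), and the test container simply cats the projected file. A local stand-in for that flow is sketched below; the directory layout and the `new-path-data-1` filename follow this e2e test's naming conventions but are assumptions here, since the log does not show the manifest.

```shell
# Local stand-in for the secret-volume projection the test exercises.
# Secret values are base64-encoded in the API object; the kubelet decodes
# them when materializing the volume, and the mapping renames the key's
# file to the path given in items[].path.
voldir=$(mktemp -d)
printf 'value-1' | base64 > "$voldir/encoded"          # as stored in the Secret object
base64 -d "$voldir/encoded" > "$voldir/new-path-data-1" # as seen inside the container
cat "$voldir/new-path-data-1"
```

The test passes when the container's output matches the original secret value, which is what the "Saw pod success" line above records.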
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":263,"skipped":4261,"failed":0} SS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:03.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 15 00:24:14.149: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.149: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.185514 7 log.go:181] (0xc0034364d0) (0xc001550dc0) Create stream I1015 00:24:14.185554 7 log.go:181] (0xc0034364d0) (0xc001550dc0) Stream added, broadcasting: 1 I1015 00:24:14.188005 7 log.go:181] (0xc0034364d0) Reply frame received for 1 I1015 00:24:14.188038 7 log.go:181] (0xc0034364d0) (0xc00343f7c0) Create stream I1015 
00:24:14.188050 7 log.go:181] (0xc0034364d0) (0xc00343f7c0) Stream added, broadcasting: 3 I1015 00:24:14.189116 7 log.go:181] (0xc0034364d0) Reply frame received for 3 I1015 00:24:14.189161 7 log.go:181] (0xc0034364d0) (0xc003d65b80) Create stream I1015 00:24:14.189178 7 log.go:181] (0xc0034364d0) (0xc003d65b80) Stream added, broadcasting: 5 I1015 00:24:14.190011 7 log.go:181] (0xc0034364d0) Reply frame received for 5 I1015 00:24:14.247620 7 log.go:181] (0xc0034364d0) Data frame received for 3 I1015 00:24:14.247680 7 log.go:181] (0xc00343f7c0) (3) Data frame handling I1015 00:24:14.247702 7 log.go:181] (0xc00343f7c0) (3) Data frame sent I1015 00:24:14.247725 7 log.go:181] (0xc0034364d0) Data frame received for 3 I1015 00:24:14.247745 7 log.go:181] (0xc00343f7c0) (3) Data frame handling I1015 00:24:14.247803 7 log.go:181] (0xc0034364d0) Data frame received for 5 I1015 00:24:14.247850 7 log.go:181] (0xc003d65b80) (5) Data frame handling I1015 00:24:14.249643 7 log.go:181] (0xc0034364d0) Data frame received for 1 I1015 00:24:14.249682 7 log.go:181] (0xc001550dc0) (1) Data frame handling I1015 00:24:14.249722 7 log.go:181] (0xc001550dc0) (1) Data frame sent I1015 00:24:14.249747 7 log.go:181] (0xc0034364d0) (0xc001550dc0) Stream removed, broadcasting: 1 I1015 00:24:14.249774 7 log.go:181] (0xc0034364d0) Go away received I1015 00:24:14.249935 7 log.go:181] (0xc0034364d0) (0xc001550dc0) Stream removed, broadcasting: 1 I1015 00:24:14.249977 7 log.go:181] (0xc0034364d0) (0xc00343f7c0) Stream removed, broadcasting: 3 I1015 00:24:14.249999 7 log.go:181] (0xc0034364d0) (0xc003d65b80) Stream removed, broadcasting: 5 Oct 15 00:24:14.250: INFO: Exec stderr: "" Oct 15 00:24:14.250: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.250: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.281697 7 log.go:181] 
(0xc0001498c0) (0xc000155c20) Create stream I1015 00:24:14.281726 7 log.go:181] (0xc0001498c0) (0xc000155c20) Stream added, broadcasting: 1 I1015 00:24:14.285395 7 log.go:181] (0xc0001498c0) Reply frame received for 1 I1015 00:24:14.285438 7 log.go:181] (0xc0001498c0) (0xc002439040) Create stream I1015 00:24:14.285450 7 log.go:181] (0xc0001498c0) (0xc002439040) Stream added, broadcasting: 3 I1015 00:24:14.286722 7 log.go:181] (0xc0001498c0) Reply frame received for 3 I1015 00:24:14.286764 7 log.go:181] (0xc0001498c0) (0xc0000df360) Create stream I1015 00:24:14.286780 7 log.go:181] (0xc0001498c0) (0xc0000df360) Stream added, broadcasting: 5 I1015 00:24:14.287762 7 log.go:181] (0xc0001498c0) Reply frame received for 5 I1015 00:24:14.352079 7 log.go:181] (0xc0001498c0) Data frame received for 3 I1015 00:24:14.352116 7 log.go:181] (0xc002439040) (3) Data frame handling I1015 00:24:14.352137 7 log.go:181] (0xc002439040) (3) Data frame sent I1015 00:24:14.352154 7 log.go:181] (0xc0001498c0) Data frame received for 3 I1015 00:24:14.352170 7 log.go:181] (0xc002439040) (3) Data frame handling I1015 00:24:14.352210 7 log.go:181] (0xc0001498c0) Data frame received for 5 I1015 00:24:14.352236 7 log.go:181] (0xc0000df360) (5) Data frame handling I1015 00:24:14.354041 7 log.go:181] (0xc0001498c0) Data frame received for 1 I1015 00:24:14.354065 7 log.go:181] (0xc000155c20) (1) Data frame handling I1015 00:24:14.354087 7 log.go:181] (0xc000155c20) (1) Data frame sent I1015 00:24:14.354101 7 log.go:181] (0xc0001498c0) (0xc000155c20) Stream removed, broadcasting: 1 I1015 00:24:14.354197 7 log.go:181] (0xc0001498c0) (0xc000155c20) Stream removed, broadcasting: 1 I1015 00:24:14.354238 7 log.go:181] (0xc0001498c0) (0xc002439040) Stream removed, broadcasting: 3 I1015 00:24:14.354251 7 log.go:181] (0xc0001498c0) (0xc0000df360) Stream removed, broadcasting: 5 Oct 15 00:24:14.354: INFO: Exec stderr: "" Oct 15 00:24:14.354: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I1015 00:24:14.354339 7 log.go:181] (0xc0001498c0) Go away received Oct 15 00:24:14.354: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.390185 7 log.go:181] (0xc003946160) (0xc0024392c0) Create stream I1015 00:24:14.390211 7 log.go:181] (0xc003946160) (0xc0024392c0) Stream added, broadcasting: 1 I1015 00:24:14.392549 7 log.go:181] (0xc003946160) Reply frame received for 1 I1015 00:24:14.392595 7 log.go:181] (0xc003946160) (0xc003d65c20) Create stream I1015 00:24:14.392611 7 log.go:181] (0xc003946160) (0xc003d65c20) Stream added, broadcasting: 3 I1015 00:24:14.393683 7 log.go:181] (0xc003946160) Reply frame received for 3 I1015 00:24:14.393713 7 log.go:181] (0xc003946160) (0xc0000df400) Create stream I1015 00:24:14.393723 7 log.go:181] (0xc003946160) (0xc0000df400) Stream added, broadcasting: 5 I1015 00:24:14.394854 7 log.go:181] (0xc003946160) Reply frame received for 5 I1015 00:24:14.469633 7 log.go:181] (0xc003946160) Data frame received for 5 I1015 00:24:14.469690 7 log.go:181] (0xc0000df400) (5) Data frame handling I1015 00:24:14.469737 7 log.go:181] (0xc003946160) Data frame received for 3 I1015 00:24:14.469757 7 log.go:181] (0xc003d65c20) (3) Data frame handling I1015 00:24:14.469783 7 log.go:181] (0xc003d65c20) (3) Data frame sent I1015 00:24:14.469816 7 log.go:181] (0xc003946160) Data frame received for 3 I1015 00:24:14.469831 7 log.go:181] (0xc003d65c20) (3) Data frame handling I1015 00:24:14.471173 7 log.go:181] (0xc003946160) Data frame received for 1 I1015 00:24:14.471193 7 log.go:181] (0xc0024392c0) (1) Data frame handling I1015 00:24:14.471216 7 log.go:181] (0xc0024392c0) (1) Data frame sent I1015 00:24:14.471264 7 log.go:181] (0xc003946160) (0xc0024392c0) Stream removed, broadcasting: 1 I1015 00:24:14.471343 7 log.go:181] (0xc003946160) (0xc0024392c0) Stream removed, broadcasting: 1 I1015 
00:24:14.471353 7 log.go:181] (0xc003946160) (0xc003d65c20) Stream removed, broadcasting: 3 I1015 00:24:14.471460 7 log.go:181] (0xc003946160) Go away received I1015 00:24:14.471501 7 log.go:181] (0xc003946160) (0xc0000df400) Stream removed, broadcasting: 5 Oct 15 00:24:14.471: INFO: Exec stderr: "" Oct 15 00:24:14.471: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.471: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.507801 7 log.go:181] (0xc0008efe40) (0xc0047e4320) Create stream I1015 00:24:14.507831 7 log.go:181] (0xc0008efe40) (0xc0047e4320) Stream added, broadcasting: 1 I1015 00:24:14.511227 7 log.go:181] (0xc0008efe40) Reply frame received for 1 I1015 00:24:14.511279 7 log.go:181] (0xc0008efe40) (0xc0000df4a0) Create stream I1015 00:24:14.511291 7 log.go:181] (0xc0008efe40) (0xc0000df4a0) Stream added, broadcasting: 3 I1015 00:24:14.512172 7 log.go:181] (0xc0008efe40) Reply frame received for 3 I1015 00:24:14.512231 7 log.go:181] (0xc0008efe40) (0xc001550f00) Create stream I1015 00:24:14.512254 7 log.go:181] (0xc0008efe40) (0xc001550f00) Stream added, broadcasting: 5 I1015 00:24:14.513280 7 log.go:181] (0xc0008efe40) Reply frame received for 5 I1015 00:24:14.585925 7 log.go:181] (0xc0008efe40) Data frame received for 5 I1015 00:24:14.585971 7 log.go:181] (0xc001550f00) (5) Data frame handling I1015 00:24:14.585998 7 log.go:181] (0xc0008efe40) Data frame received for 3 I1015 00:24:14.586013 7 log.go:181] (0xc0000df4a0) (3) Data frame handling I1015 00:24:14.586031 7 log.go:181] (0xc0000df4a0) (3) Data frame sent I1015 00:24:14.586045 7 log.go:181] (0xc0008efe40) Data frame received for 3 I1015 00:24:14.586058 7 log.go:181] (0xc0000df4a0) (3) Data frame handling I1015 00:24:14.587197 7 log.go:181] (0xc0008efe40) Data frame received for 1 I1015 00:24:14.587238 7 log.go:181] 
(0xc0047e4320) (1) Data frame handling I1015 00:24:14.587270 7 log.go:181] (0xc0047e4320) (1) Data frame sent I1015 00:24:14.587290 7 log.go:181] (0xc0008efe40) (0xc0047e4320) Stream removed, broadcasting: 1 I1015 00:24:14.587323 7 log.go:181] (0xc0008efe40) Go away received I1015 00:24:14.587462 7 log.go:181] (0xc0008efe40) (0xc0047e4320) Stream removed, broadcasting: 1 I1015 00:24:14.587490 7 log.go:181] (0xc0008efe40) (0xc0000df4a0) Stream removed, broadcasting: 3 I1015 00:24:14.587505 7 log.go:181] (0xc0008efe40) (0xc001550f00) Stream removed, broadcasting: 5 Oct 15 00:24:14.587: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 15 00:24:14.587: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.587: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.615164 7 log.go:181] (0xc000b34000) (0xc004030280) Create stream I1015 00:24:14.615202 7 log.go:181] (0xc000b34000) (0xc004030280) Stream added, broadcasting: 1 I1015 00:24:14.617552 7 log.go:181] (0xc000b34000) Reply frame received for 1 I1015 00:24:14.617586 7 log.go:181] (0xc000b34000) (0xc0000df540) Create stream I1015 00:24:14.617598 7 log.go:181] (0xc000b34000) (0xc0000df540) Stream added, broadcasting: 3 I1015 00:24:14.618474 7 log.go:181] (0xc000b34000) Reply frame received for 3 I1015 00:24:14.618509 7 log.go:181] (0xc000b34000) (0xc0047e43c0) Create stream I1015 00:24:14.618520 7 log.go:181] (0xc000b34000) (0xc0047e43c0) Stream added, broadcasting: 5 I1015 00:24:14.619254 7 log.go:181] (0xc000b34000) Reply frame received for 5 I1015 00:24:14.689020 7 log.go:181] (0xc000b34000) Data frame received for 3 I1015 00:24:14.689061 7 log.go:181] (0xc0000df540) (3) Data frame handling I1015 00:24:14.689078 7 log.go:181] (0xc0000df540) (3) Data frame sent I1015 
00:24:14.689090 7 log.go:181] (0xc000b34000) Data frame received for 3 I1015 00:24:14.689105 7 log.go:181] (0xc0000df540) (3) Data frame handling I1015 00:24:14.689136 7 log.go:181] (0xc000b34000) Data frame received for 5 I1015 00:24:14.689178 7 log.go:181] (0xc0047e43c0) (5) Data frame handling I1015 00:24:14.690632 7 log.go:181] (0xc000b34000) Data frame received for 1 I1015 00:24:14.690651 7 log.go:181] (0xc004030280) (1) Data frame handling I1015 00:24:14.690663 7 log.go:181] (0xc004030280) (1) Data frame sent I1015 00:24:14.690676 7 log.go:181] (0xc000b34000) (0xc004030280) Stream removed, broadcasting: 1 I1015 00:24:14.690712 7 log.go:181] (0xc000b34000) Go away received I1015 00:24:14.690759 7 log.go:181] (0xc000b34000) (0xc004030280) Stream removed, broadcasting: 1 I1015 00:24:14.690778 7 log.go:181] (0xc000b34000) (0xc0000df540) Stream removed, broadcasting: 3 I1015 00:24:14.690788 7 log.go:181] (0xc000b34000) (0xc0047e43c0) Stream removed, broadcasting: 5 Oct 15 00:24:14.690: INFO: Exec stderr: "" Oct 15 00:24:14.690: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.690: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.722180 7 log.go:181] (0xc0002a4f20) (0xc004030460) Create stream I1015 00:24:14.722206 7 log.go:181] (0xc0002a4f20) (0xc004030460) Stream added, broadcasting: 1 I1015 00:24:14.724242 7 log.go:181] (0xc0002a4f20) Reply frame received for 1 I1015 00:24:14.724280 7 log.go:181] (0xc0002a4f20) (0xc001550fa0) Create stream I1015 00:24:14.724292 7 log.go:181] (0xc0002a4f20) (0xc001550fa0) Stream added, broadcasting: 3 I1015 00:24:14.725338 7 log.go:181] (0xc0002a4f20) Reply frame received for 3 I1015 00:24:14.725388 7 log.go:181] (0xc0002a4f20) (0xc001551040) Create stream I1015 00:24:14.725401 7 log.go:181] (0xc0002a4f20) (0xc001551040) Stream added, broadcasting: 5 
I1015 00:24:14.726141 7 log.go:181] (0xc0002a4f20) Reply frame received for 5 I1015 00:24:14.774225 7 log.go:181] (0xc0002a4f20) Data frame received for 5 I1015 00:24:14.774257 7 log.go:181] (0xc001551040) (5) Data frame handling I1015 00:24:14.774282 7 log.go:181] (0xc0002a4f20) Data frame received for 3 I1015 00:24:14.774291 7 log.go:181] (0xc001550fa0) (3) Data frame handling I1015 00:24:14.774310 7 log.go:181] (0xc001550fa0) (3) Data frame sent I1015 00:24:14.774326 7 log.go:181] (0xc0002a4f20) Data frame received for 3 I1015 00:24:14.774334 7 log.go:181] (0xc001550fa0) (3) Data frame handling I1015 00:24:14.775398 7 log.go:181] (0xc0002a4f20) Data frame received for 1 I1015 00:24:14.775426 7 log.go:181] (0xc004030460) (1) Data frame handling I1015 00:24:14.775440 7 log.go:181] (0xc004030460) (1) Data frame sent I1015 00:24:14.775450 7 log.go:181] (0xc0002a4f20) (0xc004030460) Stream removed, broadcasting: 1 I1015 00:24:14.775460 7 log.go:181] (0xc0002a4f20) Go away received I1015 00:24:14.775586 7 log.go:181] (0xc0002a4f20) (0xc004030460) Stream removed, broadcasting: 1 I1015 00:24:14.775602 7 log.go:181] (0xc0002a4f20) (0xc001550fa0) Stream removed, broadcasting: 3 I1015 00:24:14.775612 7 log.go:181] (0xc0002a4f20) (0xc001551040) Stream removed, broadcasting: 5 Oct 15 00:24:14.775: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 15 00:24:14.775: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.775: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.817638 7 log.go:181] (0xc003436fd0) (0xc0015512c0) Create stream I1015 00:24:14.817670 7 log.go:181] (0xc003436fd0) (0xc0015512c0) Stream added, broadcasting: 1 I1015 00:24:14.819743 7 log.go:181] (0xc003436fd0) Reply frame received for 1 I1015 
00:24:14.819784 7 log.go:181] (0xc003436fd0) (0xc0000df5e0) Create stream I1015 00:24:14.819795 7 log.go:181] (0xc003436fd0) (0xc0000df5e0) Stream added, broadcasting: 3 I1015 00:24:14.820588 7 log.go:181] (0xc003436fd0) Reply frame received for 3 I1015 00:24:14.820617 7 log.go:181] (0xc003436fd0) (0xc004030500) Create stream I1015 00:24:14.820626 7 log.go:181] (0xc003436fd0) (0xc004030500) Stream added, broadcasting: 5 I1015 00:24:14.821286 7 log.go:181] (0xc003436fd0) Reply frame received for 5 I1015 00:24:14.889165 7 log.go:181] (0xc003436fd0) Data frame received for 5 I1015 00:24:14.889204 7 log.go:181] (0xc004030500) (5) Data frame handling I1015 00:24:14.889226 7 log.go:181] (0xc003436fd0) Data frame received for 3 I1015 00:24:14.889235 7 log.go:181] (0xc0000df5e0) (3) Data frame handling I1015 00:24:14.889246 7 log.go:181] (0xc0000df5e0) (3) Data frame sent I1015 00:24:14.889260 7 log.go:181] (0xc003436fd0) Data frame received for 3 I1015 00:24:14.889276 7 log.go:181] (0xc0000df5e0) (3) Data frame handling I1015 00:24:14.891316 7 log.go:181] (0xc003436fd0) Data frame received for 1 I1015 00:24:14.891357 7 log.go:181] (0xc0015512c0) (1) Data frame handling I1015 00:24:14.891400 7 log.go:181] (0xc0015512c0) (1) Data frame sent I1015 00:24:14.891516 7 log.go:181] (0xc003436fd0) (0xc0015512c0) Stream removed, broadcasting: 1 I1015 00:24:14.891632 7 log.go:181] (0xc003436fd0) (0xc0015512c0) Stream removed, broadcasting: 1 I1015 00:24:14.891652 7 log.go:181] (0xc003436fd0) (0xc0000df5e0) Stream removed, broadcasting: 3 I1015 00:24:14.891753 7 log.go:181] (0xc003436fd0) Go away received I1015 00:24:14.891800 7 log.go:181] (0xc003436fd0) (0xc004030500) Stream removed, broadcasting: 5 Oct 15 00:24:14.891: INFO: Exec stderr: "" Oct 15 00:24:14.891: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 
15 00:24:14.891: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:14.922543 7 log.go:181] (0xc0013de420) (0xc0000df900) Create stream I1015 00:24:14.922576 7 log.go:181] (0xc0013de420) (0xc0000df900) Stream added, broadcasting: 1 I1015 00:24:14.924547 7 log.go:181] (0xc0013de420) Reply frame received for 1 I1015 00:24:14.924596 7 log.go:181] (0xc0013de420) (0xc0047e4500) Create stream I1015 00:24:14.924609 7 log.go:181] (0xc0013de420) (0xc0047e4500) Stream added, broadcasting: 3 I1015 00:24:14.925630 7 log.go:181] (0xc0013de420) Reply frame received for 3 I1015 00:24:14.925660 7 log.go:181] (0xc0013de420) (0xc001551400) Create stream I1015 00:24:14.925669 7 log.go:181] (0xc0013de420) (0xc001551400) Stream added, broadcasting: 5 I1015 00:24:14.926593 7 log.go:181] (0xc0013de420) Reply frame received for 5 I1015 00:24:14.996624 7 log.go:181] (0xc0013de420) Data frame received for 5 I1015 00:24:14.996662 7 log.go:181] (0xc001551400) (5) Data frame handling I1015 00:24:14.996687 7 log.go:181] (0xc0013de420) Data frame received for 3 I1015 00:24:14.996700 7 log.go:181] (0xc0047e4500) (3) Data frame handling I1015 00:24:14.996709 7 log.go:181] (0xc0047e4500) (3) Data frame sent I1015 00:24:14.996716 7 log.go:181] (0xc0013de420) Data frame received for 3 I1015 00:24:14.996724 7 log.go:181] (0xc0047e4500) (3) Data frame handling I1015 00:24:14.998487 7 log.go:181] (0xc0013de420) Data frame received for 1 I1015 00:24:14.998527 7 log.go:181] (0xc0000df900) (1) Data frame handling I1015 00:24:14.998559 7 log.go:181] (0xc0000df900) (1) Data frame sent I1015 00:24:14.998705 7 log.go:181] (0xc0013de420) (0xc0000df900) Stream removed, broadcasting: 1 I1015 00:24:14.998762 7 log.go:181] (0xc0013de420) Go away received I1015 00:24:14.998888 7 log.go:181] (0xc0013de420) (0xc0000df900) Stream removed, broadcasting: 1 I1015 00:24:14.998931 7 log.go:181] (0xc0013de420) (0xc0047e4500) Stream removed, broadcasting: 3 I1015 00:24:14.998968 7 log.go:181] (0xc0013de420) (0xc001551400) 
Stream removed, broadcasting: 5 Oct 15 00:24:14.998: INFO: Exec stderr: "" Oct 15 00:24:14.999: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:14.999: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:15.030192 7 log.go:181] (0xc0002a5d90) (0xc0040308c0) Create stream I1015 00:24:15.030223 7 log.go:181] (0xc0002a5d90) (0xc0040308c0) Stream added, broadcasting: 1 I1015 00:24:15.032825 7 log.go:181] (0xc0002a5d90) Reply frame received for 1 I1015 00:24:15.033013 7 log.go:181] (0xc0002a5d90) (0xc002439360) Create stream I1015 00:24:15.033029 7 log.go:181] (0xc0002a5d90) (0xc002439360) Stream added, broadcasting: 3 I1015 00:24:15.034306 7 log.go:181] (0xc0002a5d90) Reply frame received for 3 I1015 00:24:15.034349 7 log.go:181] (0xc0002a5d90) (0xc002439400) Create stream I1015 00:24:15.034358 7 log.go:181] (0xc0002a5d90) (0xc002439400) Stream added, broadcasting: 5 I1015 00:24:15.035140 7 log.go:181] (0xc0002a5d90) Reply frame received for 5 I1015 00:24:15.098175 7 log.go:181] (0xc0002a5d90) Data frame received for 3 I1015 00:24:15.098216 7 log.go:181] (0xc002439360) (3) Data frame handling I1015 00:24:15.098235 7 log.go:181] (0xc002439360) (3) Data frame sent I1015 00:24:15.098268 7 log.go:181] (0xc0002a5d90) Data frame received for 5 I1015 00:24:15.098371 7 log.go:181] (0xc002439400) (5) Data frame handling I1015 00:24:15.098417 7 log.go:181] (0xc0002a5d90) Data frame received for 3 I1015 00:24:15.098437 7 log.go:181] (0xc002439360) (3) Data frame handling I1015 00:24:15.100076 7 log.go:181] (0xc0002a5d90) Data frame received for 1 I1015 00:24:15.100087 7 log.go:181] (0xc0040308c0) (1) Data frame handling I1015 00:24:15.100094 7 log.go:181] (0xc0040308c0) (1) Data frame sent I1015 00:24:15.100102 7 log.go:181] (0xc0002a5d90) (0xc0040308c0) Stream removed, broadcasting: 1 I1015 00:24:15.100160 
7 log.go:181] (0xc0002a5d90) (0xc0040308c0) Stream removed, broadcasting: 1 I1015 00:24:15.100170 7 log.go:181] (0xc0002a5d90) (0xc002439360) Stream removed, broadcasting: 3 I1015 00:24:15.100178 7 log.go:181] (0xc0002a5d90) (0xc002439400) Stream removed, broadcasting: 5 Oct 15 00:24:15.100: INFO: Exec stderr: "" Oct 15 00:24:15.100: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8460 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:24:15.100: INFO: >>> kubeConfig: /root/.kube/config I1015 00:24:15.100281 7 log.go:181] (0xc0002a5d90) Go away received I1015 00:24:15.132931 7 log.go:181] (0xc003946a50) (0xc0024397c0) Create stream I1015 00:24:15.132992 7 log.go:181] (0xc003946a50) (0xc0024397c0) Stream added, broadcasting: 1 I1015 00:24:15.136609 7 log.go:181] (0xc003946a50) Reply frame received for 1 I1015 00:24:15.136659 7 log.go:181] (0xc003946a50) (0xc0047e4640) Create stream I1015 00:24:15.136680 7 log.go:181] (0xc003946a50) (0xc0047e4640) Stream added, broadcasting: 3 I1015 00:24:15.138674 7 log.go:181] (0xc003946a50) Reply frame received for 3 I1015 00:24:15.138725 7 log.go:181] (0xc003946a50) (0xc0015515e0) Create stream I1015 00:24:15.138740 7 log.go:181] (0xc003946a50) (0xc0015515e0) Stream added, broadcasting: 5 I1015 00:24:15.139843 7 log.go:181] (0xc003946a50) Reply frame received for 5 I1015 00:24:15.208417 7 log.go:181] (0xc003946a50) Data frame received for 5 I1015 00:24:15.208449 7 log.go:181] (0xc0015515e0) (5) Data frame handling I1015 00:24:15.208486 7 log.go:181] (0xc003946a50) Data frame received for 3 I1015 00:24:15.208500 7 log.go:181] (0xc0047e4640) (3) Data frame handling I1015 00:24:15.208516 7 log.go:181] (0xc0047e4640) (3) Data frame sent I1015 00:24:15.208529 7 log.go:181] (0xc003946a50) Data frame received for 3 I1015 00:24:15.208541 7 log.go:181] (0xc0047e4640) (3) Data frame handling I1015 00:24:15.210408 7 
log.go:181] (0xc003946a50) Data frame received for 1 I1015 00:24:15.210444 7 log.go:181] (0xc0024397c0) (1) Data frame handling I1015 00:24:15.210467 7 log.go:181] (0xc0024397c0) (1) Data frame sent I1015 00:24:15.210487 7 log.go:181] (0xc003946a50) (0xc0024397c0) Stream removed, broadcasting: 1 I1015 00:24:15.210504 7 log.go:181] (0xc003946a50) Go away received I1015 00:24:15.210657 7 log.go:181] (0xc003946a50) (0xc0024397c0) Stream removed, broadcasting: 1 I1015 00:24:15.210693 7 log.go:181] (0xc003946a50) (0xc0047e4640) Stream removed, broadcasting: 3 I1015 00:24:15.210716 7 log.go:181] (0xc003946a50) (0xc0015515e0) Stream removed, broadcasting: 5 Oct 15 00:24:15.210: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:15.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8460" for this suite. 
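Each `ExecWithOptions {Command:[cat /etc/hosts] …}` call above reads the file from a container and classifies it: the kubelet-managed variant carries a recognizable header comment, while a container-mounted or hostNetwork `/etc/hosts` does not. A sketch of that classification is below; the sample file content is fabricated for illustration, and the header string is the marker the kubelet is understood to write (an assumption here, not shown in this log).

```shell
# Build a sample /etc/hosts resembling a kubelet-managed one.
# The leading comment line is the marker the verification keys on.
hosts=$(mktemp)
cat > "$hosts" <<'EOF'
# Kubernetes-managed hosts file.
127.0.0.1 localhost
10.244.1.5 test-pod
EOF

# Classify the file the same way the test does after its "cat /etc/hosts":
# presence of the marker means the kubelet owns the file.
if grep -q '^# Kubernetes-managed hosts file' "$hosts"; then
  managed=yes
else
  managed=no
fi
echo "$managed"
```

This is why the test runs the same `cat` in busybox-3 (which mounts its own `/etc/hosts`) and in the hostNetwork pod: those copies must lack the marker, confirming the kubelet leaves them alone.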
• [SLOW TEST:11.275 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":264,"skipped":4263,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:15.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Oct 15 00:24:15.265: INFO: >>> kubeConfig: /root/.kube/config Oct 15 00:24:18.234: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:28.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9394" for this suite. • [SLOW TEST:13.428 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":265,"skipped":4275,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:28.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-upd-935003f1-df84-454e-9458-4bcac814ba21 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:32.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2969" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":266,"skipped":4287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:32.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 15 00:24:33.107: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: 
verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:41.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4167" for this suite. • [SLOW TEST:8.564 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4323,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:41.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:24:41.599: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:45.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9654" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4330,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:45.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-f762a6e3-0cbb-4a71-a018-dbad366a1a8c STEP: Creating a pod to test consume configMaps Oct 15 00:24:45.853: INFO: Waiting up to 5m0s for pod 
"pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79" in namespace "projected-6270" to be "Succeeded or Failed" Oct 15 00:24:45.868: INFO: Pod "pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79": Phase="Pending", Reason="", readiness=false. Elapsed: 14.848323ms Oct 15 00:24:47.877: INFO: Pod "pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024120107s Oct 15 00:24:49.882: INFO: Pod "pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79": Phase="Running", Reason="", readiness=true. Elapsed: 4.028598086s Oct 15 00:24:51.885: INFO: Pod "pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032204231s STEP: Saw pod success Oct 15 00:24:51.885: INFO: Pod "pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79" satisfied condition "Succeeded or Failed" Oct 15 00:24:51.913: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79 container projected-configmap-volume-test: STEP: delete the pod Oct 15 00:24:51.947: INFO: Waiting for pod pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79 to disappear Oct 15 00:24:51.974: INFO: Pod pod-projected-configmaps-20eb66fb-ed19-4d53-8d3e-c95d64cbad79 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:51.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6270" for this suite. 
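Note: the projected-configMap test above creates one ConfigMap and consumes it through two separate projected volumes in the same pod. A minimal sketch of the equivalent objects (names and data are illustrative, not the generated names from this log):

```yaml
# Hypothetical equivalent of the test's setup: a single ConfigMap
# projected into two volumes mounted at different paths in one pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config                          # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo                       # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/projected-1/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/projected-1
    - name: vol-2
      mountPath: /etc/projected-2
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: demo-config
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: demo-config
```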
• [SLOW TEST:6.215 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":269,"skipped":4332,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:51.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 15 00:24:52.418: INFO: Waiting up to 5m0s for pod "pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec" in namespace "emptydir-8011" to be "Succeeded or Failed" Oct 15 00:24:52.421: INFO: Pod "pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.021008ms Oct 15 00:24:54.425: INFO: Pod "pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007510125s Oct 15 00:24:56.430: INFO: Pod "pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012181318s STEP: Saw pod success Oct 15 00:24:56.430: INFO: Pod "pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec" satisfied condition "Succeeded or Failed" Oct 15 00:24:56.432: INFO: Trying to get logs from node leguer-worker pod pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec container test-container: STEP: delete the pod Oct 15 00:24:56.460: INFO: Waiting for pod pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec to disappear Oct 15 00:24:56.482: INFO: Pod pod-39caf0fb-6bd4-4e72-a942-f7494aa00fec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:24:56.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8011" for this suite. 
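Note: the (root,0644,default) EmptyDir case above exercises an emptyDir volume on the node's default medium; the test container writes a file with mode 0644 and verifies its mode and ownership. A minimal manifest approximating what the test creates (name and command are illustrative):

```yaml
# Hypothetical pod for the (root,0644,default) case: an emptyDir on
# the default medium (node disk, not Memory); the container creates
# a file and lists its mode so the result can be inspected in logs.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                             # default medium
```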
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":270,"skipped":4341,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:24:56.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:25:11.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-6922" for this suite. STEP: Destroying namespace "nsdeletetest-1063" for this suite. 
Oct 15 00:25:11.842: INFO: Namespace nsdeletetest-1063 was already deleted STEP: Destroying namespace "nsdeletetest-2497" for this suite. • [SLOW TEST:15.356 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":271,"skipped":4348,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:25:11.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Creating a pod to test downward API volume plugin Oct 15 00:25:11.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf" in namespace "downward-api-5684" to be "Succeeded or Failed" Oct 15 00:25:11.954: INFO: Pod "downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf": Phase="Pending", Reason="", readiness=false. Elapsed: 13.687821ms Oct 15 00:25:13.959: INFO: Pod "downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018854322s Oct 15 00:25:15.963: INFO: Pod "downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023419171s STEP: Saw pod success Oct 15 00:25:15.963: INFO: Pod "downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf" satisfied condition "Succeeded or Failed" Oct 15 00:25:15.966: INFO: Trying to get logs from node leguer-worker pod downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf container client-container: STEP: delete the pod Oct 15 00:25:16.001: INFO: Waiting for pod downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf to disappear Oct 15 00:25:16.007: INFO: Pod downwardapi-volume-c22de28d-c920-4774-8abf-45fcc58837bf no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:25:16.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5684" for this suite. 
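Note: the Downward API volume test above exposes the container's cpu limit as a file via a `resourceFieldRef`. A minimal sketch of such a pod (name, mount path, and limit value are illustrative, not the generated values from this log):

```yaml
# Hypothetical pod: the downwardAPI volume publishes the container's
# cpu limit at /etc/podinfo/cpu_limit; the container prints it.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo                     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                          # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

The value written to the file is the limit converted by the optional `divisor` (default "1"), so "500m" surfaces as "1" only when rounded up to whole cores; with `divisor: 1m` it would surface as "500".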
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":272,"skipped":4381,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:25:16.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Oct 15 00:25:16.136: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:16.140: INFO: Number of nodes with available pods: 0 Oct 15 00:25:16.140: INFO: Node leguer-worker is running more than one daemon pod Oct 15 00:25:17.146: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:17.149: INFO: Number of nodes with available pods: 0 Oct 15 00:25:17.149: INFO: Node leguer-worker is running more than one daemon pod Oct 15 00:25:18.147: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:18.151: INFO: Number of nodes with available pods: 0 Oct 15 00:25:18.151: INFO: Node leguer-worker is running more than one daemon pod Oct 15 00:25:19.238: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:19.242: INFO: Number of nodes with available pods: 0 Oct 15 00:25:19.242: INFO: Node leguer-worker is running more than one daemon pod Oct 15 00:25:20.146: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:20.149: INFO: Number of nodes with available pods: 1 Oct 15 00:25:20.149: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:21.166: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:21.173: INFO: Number of nodes with available pods: 2 Oct 15 00:25:21.173: 
INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Oct 15 00:25:21.199: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:21.202: INFO: Number of nodes with available pods: 1 Oct 15 00:25:21.202: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:22.208: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:22.210: INFO: Number of nodes with available pods: 1 Oct 15 00:25:22.210: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:23.208: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:23.213: INFO: Number of nodes with available pods: 1 Oct 15 00:25:23.213: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:24.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:24.212: INFO: Number of nodes with available pods: 1 Oct 15 00:25:24.212: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:25.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:25.232: INFO: Number of nodes with available pods: 1 Oct 15 00:25:25.232: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:26.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:26.213: INFO: Number of nodes with available pods: 1 Oct 15 00:25:26.213: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:27.226: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:27.237: INFO: Number of nodes with available pods: 1 Oct 15 00:25:27.237: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:28.208: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:28.212: INFO: Number of nodes with available pods: 1 Oct 15 00:25:28.213: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:29.207: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:29.211: INFO: Number of nodes with available pods: 1 Oct 15 00:25:29.211: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:30.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:30.212: INFO: Number of nodes with available pods: 1 Oct 15 00:25:30.212: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:31.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:31.212: INFO: Number of nodes with available pods: 1 Oct 15 00:25:31.212: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:32.209: INFO: DaemonSet pods can't tolerate node 
leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:32.213: INFO: Number of nodes with available pods: 1 Oct 15 00:25:32.213: INFO: Node leguer-worker2 is running more than one daemon pod Oct 15 00:25:33.209: INFO: DaemonSet pods can't tolerate node leguer-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 15 00:25:33.213: INFO: Number of nodes with available pods: 2 Oct 15 00:25:33.213: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6507, will wait for the garbage collector to delete the pods Oct 15 00:25:33.280: INFO: Deleting DaemonSet.extensions daemon-set took: 11.414261ms Oct 15 00:25:33.780: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.207009ms Oct 15 00:25:40.286: INFO: Number of nodes with available pods: 0 Oct 15 00:25:40.286: INFO: Number of running nodes: 0, number of available pods: 0 Oct 15 00:25:40.289: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6507/daemonsets","resourceVersion":"2971463"},"items":null} Oct 15 00:25:40.291: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6507/pods","resourceVersion":"2971463"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:25:40.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"daemonsets-6507" for this suite. • [SLOW TEST:24.384 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":273,"skipped":4415,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:25:40.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 15 00:25:40.478: INFO: PodSpec: initContainers in spec.initContainers Oct 15 00:26:31.232: INFO: init container has failed twice: 
&v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-360a19c7-b628-4614-9a63-e513242ce54c", GenerateName:"", Namespace:"init-container-3836", SelfLink:"/api/v1/namespaces/init-container-3836/pods/pod-init-360a19c7-b628-4614-9a63-e513242ce54c", UID:"61f1ba98-78a8-42f7-98a2-7a94744439fc", ResourceVersion:"2971663", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63738318340, loc:(*time.Location)(0x7701840)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"478473623"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000440ae0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000441020)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc000441140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000441160)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-jhhbd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002670000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-jhhbd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004b320a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"leguer-worker", 
HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f46000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b32130)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004b32150)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004b32158), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004b3215c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0041d8020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318340, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318340, loc:(*time.Location)(0x7701840)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318340, loc:(*time.Location)(0x7701840)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318340, loc:(*time.Location)(0x7701840)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.18", PodIP:"10.244.2.56", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.56"}}, StartTime:(*v1.Time)(0xc000441220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002f460e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002f461c0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://91168eff4181aef2690d2c4109cd9440bb730b050b91a6404024e76b80ce3cff", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc000441460), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0004413c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc004b321df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:31.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3836" for this suite. • [SLOW TEST:50.899 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":274,"skipped":4424,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 
00:26:31.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-8fdeb38e-e6a3-4852-943e-daab3da81269 STEP: Creating a pod to test consume configMaps Oct 15 00:26:31.371: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61" in namespace "projected-27" to be "Succeeded or Failed" Oct 15 00:26:31.388: INFO: Pod "pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61": Phase="Pending", Reason="", readiness=false. Elapsed: 16.796102ms Oct 15 00:26:33.392: INFO: Pod "pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021255914s Oct 15 00:26:35.397: INFO: Pod "pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026150468s STEP: Saw pod success Oct 15 00:26:35.397: INFO: Pod "pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61" satisfied condition "Succeeded or Failed" Oct 15 00:26:35.401: INFO: Trying to get logs from node leguer-worker pod pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61 container projected-configmap-volume-test: STEP: delete the pod Oct 15 00:26:35.576: INFO: Waiting for pod pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61 to disappear Oct 15 00:26:35.621: INFO: Pod pod-projected-configmaps-875dbb21-8540-4a2c-90a8-7e7f73319d61 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:35.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-27" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4434,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:26:35.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Oct 15 00:26:35.803: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config api-versions' Oct 15 00:26:36.033: INFO: stderr: "" Oct 15 00:26:36.033: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:36.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1286" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":276,"skipped":4441,"failed":0} S ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:26:36.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:26:40.221: INFO: Waiting up to 5m0s for pod "client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810" in namespace "pods-2303" to be "Succeeded or Failed" Oct 15 00:26:40.242: INFO: Pod "client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810": Phase="Pending", Reason="", readiness=false. Elapsed: 21.258812ms Oct 15 00:26:42.248: INFO: Pod "client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026341735s Oct 15 00:26:44.269: INFO: Pod "client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.047309498s STEP: Saw pod success Oct 15 00:26:44.269: INFO: Pod "client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810" satisfied condition "Succeeded or Failed" Oct 15 00:26:44.272: INFO: Trying to get logs from node leguer-worker2 pod client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810 container env3cont: STEP: delete the pod Oct 15 00:26:44.295: INFO: Waiting for pod client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810 to disappear Oct 15 00:26:44.300: INFO: Pod client-envvars-a1461b96-d5b4-4e6d-a5ba-8cbeb19cf810 no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2303" for this suite. • [SLOW TEST:8.252 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":277,"skipped":4442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:26:44.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-f2ceff40-c53e-45a4-af63-aaeabd75c67f STEP: Creating a pod to test consume secrets Oct 15 00:26:44.401: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a" in namespace "projected-9067" to be "Succeeded or Failed" Oct 15 00:26:44.414: INFO: Pod "pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.736826ms Oct 15 00:26:46.418: INFO: Pod "pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017309057s Oct 15 00:26:48.422: INFO: Pod "pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021226255s STEP: Saw pod success Oct 15 00:26:48.422: INFO: Pod "pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a" satisfied condition "Succeeded or Failed" Oct 15 00:26:48.425: INFO: Trying to get logs from node leguer-worker pod pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a container projected-secret-volume-test: STEP: delete the pod Oct 15 00:26:48.454: INFO: Waiting for pod pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a to disappear Oct 15 00:26:48.478: INFO: Pod pod-projected-secrets-c58bf088-6889-4723-a9d4-8360b4576c8a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:48.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9067" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":278,"skipped":4474,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:26:48.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] 
creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:26:48.806: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:26:49.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5010" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":279,"skipped":4493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:26:49.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-3356 STEP: creating service affinity-nodeport-transition in namespace services-3356 STEP: creating replication controller affinity-nodeport-transition in namespace services-3356 I1015 00:26:49.995570 7 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3356, replica count: 3 I1015 00:26:53.046037 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1015 00:26:56.046310 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1015 00:26:59.046591 7 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 15 00:26:59.057: INFO: Creating new exec pod Oct 15 00:27:04.086: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Oct 15 00:27:04.305: INFO: stderr: "I1015 00:27:04.215250 3628 log.go:181] (0xc0002be0b0) (0xc000b9fae0) Create stream\nI1015 00:27:04.215318 3628 log.go:181] (0xc0002be0b0) (0xc000b9fae0) Stream added, broadcasting: 1\nI1015 00:27:04.217133 3628 log.go:181] (0xc0002be0b0) Reply frame received for 1\nI1015 00:27:04.217204 3628 log.go:181] (0xc0002be0b0) (0xc0000d23c0) Create stream\nI1015 00:27:04.217225 3628 log.go:181] (0xc0002be0b0) (0xc0000d23c0) Stream added, broadcasting: 3\nI1015 00:27:04.218065 3628 log.go:181] (0xc0002be0b0) Reply frame received for 3\nI1015 00:27:04.218119 3628 log.go:181] 
(0xc0002be0b0) (0xc0004580a0) Create stream\nI1015 00:27:04.218134 3628 log.go:181] (0xc0002be0b0) (0xc0004580a0) Stream added, broadcasting: 5\nI1015 00:27:04.218844 3628 log.go:181] (0xc0002be0b0) Reply frame received for 5\nI1015 00:27:04.295288 3628 log.go:181] (0xc0002be0b0) Data frame received for 5\nI1015 00:27:04.295320 3628 log.go:181] (0xc0004580a0) (5) Data frame handling\nI1015 00:27:04.295341 3628 log.go:181] (0xc0004580a0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI1015 00:27:04.296103 3628 log.go:181] (0xc0002be0b0) Data frame received for 5\nI1015 00:27:04.296131 3628 log.go:181] (0xc0004580a0) (5) Data frame handling\nI1015 00:27:04.296143 3628 log.go:181] (0xc0004580a0) (5) Data frame sent\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1015 00:27:04.296277 3628 log.go:181] (0xc0002be0b0) Data frame received for 3\nI1015 00:27:04.296311 3628 log.go:181] (0xc0000d23c0) (3) Data frame handling\nI1015 00:27:04.296474 3628 log.go:181] (0xc0002be0b0) Data frame received for 5\nI1015 00:27:04.296533 3628 log.go:181] (0xc0004580a0) (5) Data frame handling\nI1015 00:27:04.298440 3628 log.go:181] (0xc0002be0b0) Data frame received for 1\nI1015 00:27:04.298461 3628 log.go:181] (0xc000b9fae0) (1) Data frame handling\nI1015 00:27:04.298483 3628 log.go:181] (0xc000b9fae0) (1) Data frame sent\nI1015 00:27:04.298502 3628 log.go:181] (0xc0002be0b0) (0xc000b9fae0) Stream removed, broadcasting: 1\nI1015 00:27:04.298567 3628 log.go:181] (0xc0002be0b0) Go away received\nI1015 00:27:04.298939 3628 log.go:181] (0xc0002be0b0) (0xc000b9fae0) Stream removed, broadcasting: 1\nI1015 00:27:04.298960 3628 log.go:181] (0xc0002be0b0) (0xc0000d23c0) Stream removed, broadcasting: 3\nI1015 00:27:04.298970 3628 log.go:181] (0xc0002be0b0) (0xc0004580a0) Stream removed, broadcasting: 5\n" Oct 15 00:27:04.305: INFO: stdout: "" Oct 15 00:27:04.306: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 
--kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c nc -zv -t -w 2 10.99.113.248 80' Oct 15 00:27:04.518: INFO: stderr: "I1015 00:27:04.444002 3645 log.go:181] (0xc0006ec000) (0xc000a241e0) Create stream\nI1015 00:27:04.444089 3645 log.go:181] (0xc0006ec000) (0xc000a241e0) Stream added, broadcasting: 1\nI1015 00:27:04.446678 3645 log.go:181] (0xc0006ec000) Reply frame received for 1\nI1015 00:27:04.446724 3645 log.go:181] (0xc0006ec000) (0xc000f02000) Create stream\nI1015 00:27:04.446741 3645 log.go:181] (0xc0006ec000) (0xc000f02000) Stream added, broadcasting: 3\nI1015 00:27:04.447950 3645 log.go:181] (0xc0006ec000) Reply frame received for 3\nI1015 00:27:04.447984 3645 log.go:181] (0xc0006ec000) (0xc000f020a0) Create stream\nI1015 00:27:04.447994 3645 log.go:181] (0xc0006ec000) (0xc000f020a0) Stream added, broadcasting: 5\nI1015 00:27:04.448961 3645 log.go:181] (0xc0006ec000) Reply frame received for 5\nI1015 00:27:04.510208 3645 log.go:181] (0xc0006ec000) Data frame received for 5\nI1015 00:27:04.510251 3645 log.go:181] (0xc0006ec000) Data frame received for 3\nI1015 00:27:04.510297 3645 log.go:181] (0xc000f02000) (3) Data frame handling\nI1015 00:27:04.510329 3645 log.go:181] (0xc000f020a0) (5) Data frame handling\nI1015 00:27:04.510344 3645 log.go:181] (0xc000f020a0) (5) Data frame sent\nI1015 00:27:04.510355 3645 log.go:181] (0xc0006ec000) Data frame received for 5\nI1015 00:27:04.510365 3645 log.go:181] (0xc000f020a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.113.248 80\nConnection to 10.99.113.248 80 port [tcp/http] succeeded!\nI1015 00:27:04.512101 3645 log.go:181] (0xc0006ec000) Data frame received for 1\nI1015 00:27:04.512130 3645 log.go:181] (0xc000a241e0) (1) Data frame handling\nI1015 00:27:04.512167 3645 log.go:181] (0xc000a241e0) (1) Data frame sent\nI1015 00:27:04.512195 3645 log.go:181] (0xc0006ec000) (0xc000a241e0) Stream removed, broadcasting: 1\nI1015 00:27:04.512280 3645 log.go:181] 
(0xc0006ec000) Go away received\nI1015 00:27:04.512640 3645 log.go:181] (0xc0006ec000) (0xc000a241e0) Stream removed, broadcasting: 1\nI1015 00:27:04.512660 3645 log.go:181] (0xc0006ec000) (0xc000f02000) Stream removed, broadcasting: 3\nI1015 00:27:04.512672 3645 log.go:181] (0xc0006ec000) (0xc000f020a0) Stream removed, broadcasting: 5\n" Oct 15 00:27:04.518: INFO: stdout: "" Oct 15 00:27:04.518: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.18 30000' Oct 15 00:27:04.743: INFO: stderr: "I1015 00:27:04.651102 3664 log.go:181] (0xc000549c30) (0xc000540c80) Create stream\nI1015 00:27:04.651153 3664 log.go:181] (0xc000549c30) (0xc000540c80) Stream added, broadcasting: 1\nI1015 00:27:04.657895 3664 log.go:181] (0xc000549c30) Reply frame received for 1\nI1015 00:27:04.657947 3664 log.go:181] (0xc000549c30) (0xc00083a000) Create stream\nI1015 00:27:04.657965 3664 log.go:181] (0xc000549c30) (0xc00083a000) Stream added, broadcasting: 3\nI1015 00:27:04.658919 3664 log.go:181] (0xc000549c30) Reply frame received for 3\nI1015 00:27:04.658967 3664 log.go:181] (0xc000549c30) (0xc000540000) Create stream\nI1015 00:27:04.658991 3664 log.go:181] (0xc000549c30) (0xc000540000) Stream added, broadcasting: 5\nI1015 00:27:04.659883 3664 log.go:181] (0xc000549c30) Reply frame received for 5\nI1015 00:27:04.735566 3664 log.go:181] (0xc000549c30) Data frame received for 3\nI1015 00:27:04.735600 3664 log.go:181] (0xc00083a000) (3) Data frame handling\nI1015 00:27:04.735625 3664 log.go:181] (0xc000549c30) Data frame received for 5\nI1015 00:27:04.735636 3664 log.go:181] (0xc000540000) (5) Data frame handling\nI1015 00:27:04.735649 3664 log.go:181] (0xc000540000) (5) Data frame sent\nI1015 00:27:04.735658 3664 log.go:181] (0xc000549c30) Data frame received for 5\nI1015 00:27:04.735667 3664 log.go:181] (0xc000540000) (5) Data frame 
handling\n+ nc -zv -t -w 2 172.18.0.18 30000\nConnection to 172.18.0.18 30000 port [tcp/30000] succeeded!\nI1015 00:27:04.737223 3664 log.go:181] (0xc000549c30) Data frame received for 1\nI1015 00:27:04.737245 3664 log.go:181] (0xc000540c80) (1) Data frame handling\nI1015 00:27:04.737261 3664 log.go:181] (0xc000540c80) (1) Data frame sent\nI1015 00:27:04.737283 3664 log.go:181] (0xc000549c30) (0xc000540c80) Stream removed, broadcasting: 1\nI1015 00:27:04.737305 3664 log.go:181] (0xc000549c30) Go away received\nI1015 00:27:04.737778 3664 log.go:181] (0xc000549c30) (0xc000540c80) Stream removed, broadcasting: 1\nI1015 00:27:04.737808 3664 log.go:181] (0xc000549c30) (0xc00083a000) Stream removed, broadcasting: 3\nI1015 00:27:04.737824 3664 log.go:181] (0xc000549c30) (0xc000540000) Stream removed, broadcasting: 5\n" Oct 15 00:27:04.743: INFO: stdout: "" Oct 15 00:27:04.743: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.17 30000' Oct 15 00:27:04.964: INFO: stderr: "I1015 00:27:04.884263 3682 log.go:181] (0xc000168d10) (0xc000160460) Create stream\nI1015 00:27:04.884393 3682 log.go:181] (0xc000168d10) (0xc000160460) Stream added, broadcasting: 1\nI1015 00:27:04.887388 3682 log.go:181] (0xc000168d10) Reply frame received for 1\nI1015 00:27:04.887440 3682 log.go:181] (0xc000168d10) (0xc000b2e000) Create stream\nI1015 00:27:04.887456 3682 log.go:181] (0xc000168d10) (0xc000b2e000) Stream added, broadcasting: 3\nI1015 00:27:04.888405 3682 log.go:181] (0xc000168d10) Reply frame received for 3\nI1015 00:27:04.888441 3682 log.go:181] (0xc000168d10) (0xc000b2e0a0) Create stream\nI1015 00:27:04.888451 3682 log.go:181] (0xc000168d10) (0xc000b2e0a0) Stream added, broadcasting: 5\nI1015 00:27:04.889534 3682 log.go:181] (0xc000168d10) Reply frame received for 5\nI1015 00:27:04.956556 3682 log.go:181] (0xc000168d10) Data frame 
received for 5\nI1015 00:27:04.956608 3682 log.go:181] (0xc000b2e0a0) (5) Data frame handling\nI1015 00:27:04.956634 3682 log.go:181] (0xc000b2e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.17 30000\nConnection to 172.18.0.17 30000 port [tcp/30000] succeeded!\nI1015 00:27:04.956656 3682 log.go:181] (0xc000168d10) Data frame received for 5\nI1015 00:27:04.956697 3682 log.go:181] (0xc000b2e0a0) (5) Data frame handling\nI1015 00:27:04.956733 3682 log.go:181] (0xc000168d10) Data frame received for 3\nI1015 00:27:04.956949 3682 log.go:181] (0xc000b2e000) (3) Data frame handling\nI1015 00:27:04.958364 3682 log.go:181] (0xc000168d10) Data frame received for 1\nI1015 00:27:04.958384 3682 log.go:181] (0xc000160460) (1) Data frame handling\nI1015 00:27:04.958394 3682 log.go:181] (0xc000160460) (1) Data frame sent\nI1015 00:27:04.958409 3682 log.go:181] (0xc000168d10) (0xc000160460) Stream removed, broadcasting: 1\nI1015 00:27:04.958423 3682 log.go:181] (0xc000168d10) Go away received\nI1015 00:27:04.959032 3682 log.go:181] (0xc000168d10) (0xc000160460) Stream removed, broadcasting: 1\nI1015 00:27:04.959055 3682 log.go:181] (0xc000168d10) (0xc000b2e000) Stream removed, broadcasting: 3\nI1015 00:27:04.959066 3682 log.go:181] (0xc000168d10) (0xc000b2e0a0) Stream removed, broadcasting: 5\n" Oct 15 00:27:04.964: INFO: stdout: "" Oct 15 00:27:04.971: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:30000/ ; done' Oct 15 00:27:05.287: INFO: stderr: "I1015 00:27:05.116036 3700 log.go:181] (0xc000e9ef20) (0xc0005df9a0) Create stream\nI1015 00:27:05.116125 3700 log.go:181] (0xc000e9ef20) (0xc0005df9a0) Stream added, broadcasting: 1\nI1015 00:27:05.121464 3700 log.go:181] (0xc000e9ef20) Reply frame received for 1\nI1015 00:27:05.121496 3700 log.go:181] (0xc000e9ef20) 
(0xc0005de6e0) Create stream\nI1015 00:27:05.121525 3700 log.go:181] (0xc000e9ef20) (0xc0005de6e0) Stream added, broadcasting: 3\nI1015 00:27:05.122264 3700 log.go:181] (0xc000e9ef20) Reply frame received for 3\nI1015 00:27:05.122291 3700 log.go:181] (0xc000e9ef20) (0xc0004eadc0) Create stream\nI1015 00:27:05.122302 3700 log.go:181] (0xc000e9ef20) (0xc0004eadc0) Stream added, broadcasting: 5\nI1015 00:27:05.123143 3700 log.go:181] (0xc000e9ef20) Reply frame received for 5\nI1015 00:27:05.177135 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.177155 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.177163 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.177260 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.177284 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.177310 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.183375 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.183392 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.183404 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.184295 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.184324 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.184341 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.184692 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.184707 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.184718 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.191500 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.191518 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.191534 3700 log.go:181] 
(0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.192255 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.192277 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.192286 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.192298 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.192305 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.192311 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.199833 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.199852 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.199875 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.200721 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.200748 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.200775 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.200794 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.200812 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.200821 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.204052 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.204082 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.204112 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.204344 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.204374 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.204391 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ I1015 00:27:05.204458 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.204473 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 
00:27:05.204486 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\ncurl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.204510 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.204531 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.204548 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.211279 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.211307 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.211328 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.211345 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.211360 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.211384 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.211402 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.211416 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.211462 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.215372 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.215394 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.215423 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.215828 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.215925 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.215957 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.216014 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.216047 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.216067 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.220777 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 
00:27:05.220808 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.220819 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.221841 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.221861 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.221871 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.221900 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.221921 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.221933 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.226238 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.226261 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.226278 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.226880 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.226904 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.226915 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.226932 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.226942 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.226952 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.234378 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.234392 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.234399 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.234789 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.234803 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.234812 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.235113 3700 log.go:181] (0xc000e9ef20) Data frame 
received for 5\nI1015 00:27:05.235128 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.235143 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.242328 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.242352 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.242371 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.243246 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.243258 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.243269 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.243278 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.243284 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.243289 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.249428 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.249451 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.249461 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.250266 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.250300 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.250314 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.250340 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.250364 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.250387 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.254815 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.254841 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.254883 3700 log.go:181] (0xc0005de6e0) 
(3) Data frame sent\nI1015 00:27:05.255704 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.255722 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.255731 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.255768 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.255793 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.255823 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.262215 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.262246 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.262265 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.263025 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.263044 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.263054 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.263263 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.263291 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.263313 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.267756 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.267776 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.267787 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.268168 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.268189 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.268207 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.268232 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.268251 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.268264 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.272347 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.272375 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.272403 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.273518 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.273553 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.273583 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.273624 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.273636 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.273673 3700 log.go:181] (0xc0004eadc0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.278153 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.278181 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.278210 3700 log.go:181] (0xc0005de6e0) (3) Data frame sent\nI1015 00:27:05.279269 3700 log.go:181] (0xc000e9ef20) Data frame received for 3\nI1015 00:27:05.279306 3700 log.go:181] (0xc0005de6e0) (3) Data frame handling\nI1015 00:27:05.279459 3700 log.go:181] (0xc000e9ef20) Data frame received for 5\nI1015 00:27:05.279489 3700 log.go:181] (0xc0004eadc0) (5) Data frame handling\nI1015 00:27:05.281233 3700 log.go:181] (0xc000e9ef20) Data frame received for 1\nI1015 00:27:05.281268 3700 log.go:181] (0xc0005df9a0) (1) Data frame handling\nI1015 00:27:05.281291 3700 log.go:181] (0xc0005df9a0) (1) Data frame sent\nI1015 00:27:05.281325 3700 log.go:181] (0xc000e9ef20) (0xc0005df9a0) Stream removed, broadcasting: 1\nI1015 00:27:05.281369 3700 log.go:181] (0xc000e9ef20) Go away received\nI1015 00:27:05.281681 3700 log.go:181] (0xc000e9ef20) (0xc0005df9a0) Stream removed, broadcasting: 1\nI1015 00:27:05.281700 3700 log.go:181] 
(0xc000e9ef20) (0xc0005de6e0) Stream removed, broadcasting: 3\nI1015 00:27:05.281709 3700 log.go:181] (0xc000e9ef20) (0xc0004eadc0) Stream removed, broadcasting: 5\n" Oct 15 00:27:05.288: INFO: stdout: "\naffinity-nodeport-transition-65rds\naffinity-nodeport-transition-z987h\naffinity-nodeport-transition-z987h\naffinity-nodeport-transition-z987h\naffinity-nodeport-transition-65rds\naffinity-nodeport-transition-65rds\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-z987h\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-z987h\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-65rds\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4" Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-65rds Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-z987h Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-z987h Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-z987h Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-65rds Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-65rds Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-z987h Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-z987h Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received 
response from host: affinity-nodeport-transition-65rds Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.288: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config exec --namespace=services-3356 execpod-affinitybz4z2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.18:30000/ ; done' Oct 15 00:27:05.630: INFO: stderr: "I1015 00:27:05.441015 3718 log.go:181] (0xc000834f20) (0xc000498b40) Create stream\nI1015 00:27:05.441067 3718 log.go:181] (0xc000834f20) (0xc000498b40) Stream added, broadcasting: 1\nI1015 00:27:05.445614 3718 log.go:181] (0xc000834f20) Reply frame received for 1\nI1015 00:27:05.445659 3718 log.go:181] (0xc000834f20) (0xc00056e500) Create stream\nI1015 00:27:05.445671 3718 log.go:181] (0xc000834f20) (0xc00056e500) Stream added, broadcasting: 3\nI1015 00:27:05.446469 3718 log.go:181] (0xc000834f20) Reply frame received for 3\nI1015 00:27:05.446496 3718 log.go:181] (0xc000834f20) (0xc00056f4a0) Create stream\nI1015 00:27:05.446504 3718 log.go:181] (0xc000834f20) (0xc00056f4a0) Stream added, broadcasting: 5\nI1015 00:27:05.447374 3718 log.go:181] (0xc000834f20) Reply frame received for 5\nI1015 00:27:05.517273 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.517311 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.517324 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.517344 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.517354 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.517365 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.523651 3718 log.go:181] (0xc000834f20) Data frame received for 
3\nI1015 00:27:05.523685 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.523711 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.524548 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.524573 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.524597 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.524687 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.524711 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.524729 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.528285 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.528303 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.528312 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.529500 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.529533 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.529571 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.529616 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.529633 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.529647 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.535010 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.535045 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.535071 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.535964 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.535987 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.536000 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.536021 3718 log.go:181] (0xc000834f20) Data 
frame received for 5\nI1015 00:27:05.536048 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.536073 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.541948 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.541991 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.542017 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.542750 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.542777 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.542797 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.542829 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.542855 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.542893 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.546197 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.546220 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.546232 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.547116 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.547135 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.547155 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.547371 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.547400 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.547438 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.551656 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.551697 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.551726 3718 log.go:181] 
(0xc00056e500) (3) Data frame sent\nI1015 00:27:05.552682 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.552704 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.552725 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.552789 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.552825 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.552964 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.560748 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.560773 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.560796 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.561823 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.561855 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.561877 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.561900 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.561933 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.561973 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.566968 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.567015 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.567077 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.567881 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.567910 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.567945 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.567994 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.568038 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.568077 
3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.575883 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.575896 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.575901 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.576213 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.576236 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.576249 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.576260 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.576269 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.576299 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.576421 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.576431 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.576436 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.582924 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.582943 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.582948 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.584173 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.584186 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.584192 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.584221 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.584245 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.584277 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.591096 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 
00:27:05.591118 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.591133 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.592366 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.592417 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.592445 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.592488 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.592513 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.592552 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.599363 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.599390 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.599416 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.599977 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.599999 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.600018 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.600125 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.600145 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.600160 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.606547 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.606572 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.606584 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.606904 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.606925 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.606936 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.606945 3718 log.go:181] (0xc000834f20) Data frame 
received for 5\n+ echo\n+ curl -qI1015 00:27:05.606958 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.606968 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.606986 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.607015 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.607053 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.610876 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.610895 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.610917 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.611257 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.611274 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.611295 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.611304 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.611314 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.611320 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.611325 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.611330 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.18:30000/\nI1015 00:27:05.611342 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\nI1015 00:27:05.617605 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.617634 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.617658 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.618244 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.618266 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.618278 3718 log.go:181] (0xc00056f4a0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.18:30000/\nI1015 00:27:05.618367 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.618387 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.618401 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.622404 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.622421 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.622428 3718 log.go:181] (0xc00056e500) (3) Data frame sent\nI1015 00:27:05.623106 3718 log.go:181] (0xc000834f20) Data frame received for 5\nI1015 00:27:05.623126 3718 log.go:181] (0xc00056f4a0) (5) Data frame handling\nI1015 00:27:05.623312 3718 log.go:181] (0xc000834f20) Data frame received for 3\nI1015 00:27:05.623336 3718 log.go:181] (0xc00056e500) (3) Data frame handling\nI1015 00:27:05.624951 3718 log.go:181] (0xc000834f20) Data frame received for 1\nI1015 00:27:05.624983 3718 log.go:181] (0xc000498b40) (1) Data frame handling\nI1015 00:27:05.625000 3718 log.go:181] (0xc000498b40) (1) Data frame sent\nI1015 00:27:05.625016 3718 log.go:181] (0xc000834f20) (0xc000498b40) Stream removed, broadcasting: 1\nI1015 00:27:05.625030 3718 log.go:181] (0xc000834f20) Go away received\nI1015 00:27:05.625458 3718 log.go:181] (0xc000834f20) (0xc000498b40) Stream removed, broadcasting: 1\nI1015 00:27:05.625492 3718 log.go:181] (0xc000834f20) (0xc00056e500) Stream removed, broadcasting: 3\nI1015 00:27:05.625516 3718 log.go:181] (0xc000834f20) (0xc00056f4a0) Stream removed, broadcasting: 5\n" Oct 15 00:27:05.631: INFO: stdout: 
"\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4\naffinity-nodeport-transition-zpbp4" Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Received response from host: 
affinity-nodeport-transition-zpbp4 Oct 15 00:27:05.631: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-3356, will wait for the garbage collector to delete the pods Oct 15 00:27:05.744: INFO: Deleting ReplicationController affinity-nodeport-transition took: 15.380316ms Oct 15 00:27:06.244: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.21051ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:27:20.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3356" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:30.572 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":280,"skipped":4518,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:27:20.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 15 00:27:21.014: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 15 00:27:23.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 15 00:27:25.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63738318441, loc:(*time.Location)(0x7701840)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 15 00:27:28.059: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:27:28.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4776" for this suite. STEP: Destroying namespace "webhook-4776-markers" for this suite. 
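The webhook test above deploys a mutating admission webhook pod and verifies ConfigMaps are (or are not) mutated. As an illustrative aside, a mutating webhook replies to an AdmissionReview with an allowed verdict plus a base64-encoded JSONPatch. The sketch below builds such a response body using only the standard library; the label key `mutated` and the uid are illustrative, not values from this test run, and the field names follow the `admission.k8s.io/v1` schema.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// buildPatchResponse sketches the body a mutating webhook returns:
// an allowed verdict plus a base64-encoded JSONPatch that adds a
// label to the incoming object.
func buildPatchResponse(uid string) ([]byte, error) {
	patch := []map[string]interface{}{
		{"op": "add", "path": "/metadata/labels/mutated", "value": "true"},
	}
	patchBytes, err := json.Marshal(patch)
	if err != nil {
		return nil, err
	}
	resp := map[string]interface{}{
		"apiVersion": "admission.k8s.io/v1",
		"kind":       "AdmissionReview",
		"response": map[string]interface{}{
			"uid":       uid,
			"allowed":   true,
			"patchType": "JSONPatch",
			"patch":     base64.StdEncoding.EncodeToString(patchBytes),
		},
	}
	return json.Marshal(resp)
}

func main() {
	body, _ := buildPatchResponse("example-uid")
	fmt.Println(string(body))
}
```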
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.355 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":281,"skipped":4521,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:27:28.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 15 00:27:28.873: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Pending, waiting for it to be Running (with Ready = true) Oct 15 00:27:30.877: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Pending, waiting for it to be Running (with Ready = true) Oct 15 00:27:32.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:34.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:36.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:38.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:40.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:42.878: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:44.877: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:46.879: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:48.877: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:50.886: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = false) Oct 15 00:27:52.877: INFO: The status of Pod test-webserver-f4b5d57c-515c-4a95-b740-05c360326d74 is Running (Ready = true) Oct 15 00:27:52.910: INFO: Container started at 2020-10-15 00:27:31 +0000 UTC, pod became ready at 2020-10-15 00:27:51 +0000 UTC [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:27:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6570" for this suite. • [SLOW TEST:24.142 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":282,"skipped":4532,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:27:52.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:28:09.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1476" for this suite. • [SLOW TEST:16.329 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":283,"skipped":4542,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:28:09.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-2656b6ef-d2e8-4afa-ae3b-54311e8d184f STEP: Creating a pod to test consume secrets Oct 15 00:28:09.346: INFO: Waiting up to 5m0s for pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699" in namespace "secrets-8719" to be "Succeeded or Failed" Oct 15 00:28:09.382: INFO: Pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699": Phase="Pending", Reason="", readiness=false. Elapsed: 35.539183ms Oct 15 00:28:11.386: INFO: Pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039454832s Oct 15 00:28:13.390: INFO: Pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699": Phase="Running", Reason="", readiness=true. Elapsed: 4.043706962s Oct 15 00:28:15.394: INFO: Pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.047510235s STEP: Saw pod success Oct 15 00:28:15.394: INFO: Pod "pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699" satisfied condition "Succeeded or Failed" Oct 15 00:28:15.397: INFO: Trying to get logs from node leguer-worker pod pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699 container secret-volume-test: STEP: delete the pod Oct 15 00:28:15.445: INFO: Waiting for pod pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699 to disappear Oct 15 00:28:15.457: INFO: Pod pod-secrets-cd0b4f18-ed67-41d5-a0b4-d0b243fef699 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:28:15.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8719" for this suite. • [SLOW TEST:6.216 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4564,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:28:15.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:28:31.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3293" for this suite. • [SLOW TEST:16.125 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":285,"skipped":4574,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing 
container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:28:31.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-44ab1810-d24b-4382-8f18-91fd1d9f57bb in namespace container-probe-3906 Oct 15 00:28:35.712: INFO: Started pod busybox-44ab1810-d24b-4382-8f18-91fd1d9f57bb in namespace container-probe-3906 STEP: checking the pod's current state and verifying that restartCount is present Oct 15 00:28:35.722: INFO: Initial restart count of pod busybox-44ab1810-d24b-4382-8f18-91fd1d9f57bb is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:32:36.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3906" for this suite. 
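The probe test above holds a pod for four minutes and confirms `restartCount` stays 0, because `cat /tmp/health` keeps succeeding. A minimal sketch of the underlying kubelet decision rule, assuming the default `failureThreshold` of 3; the boolean results stand in for the exec probe's exit statuses and are illustrative:

```go
package main

import "fmt"

// needsRestart sketches the liveness decision: a container is restarted
// only after failureThreshold consecutive probe failures; any success
// resets the counter.
func needsRestart(results []bool, failureThreshold int) bool {
	consecutive := 0
	for _, ok := range results {
		if ok {
			consecutive = 0
			continue
		}
		consecutive++
		if consecutive >= failureThreshold {
			return true
		}
	}
	return false
}

func main() {
	healthy := []bool{true, true, true, true} // /tmp/health always readable
	fmt.Println(needsRestart(healthy, 3))     // false: restartCount stays 0
}
```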
• [SLOW TEST:244.937 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":286,"skipped":4582,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:32:36.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 15 00:32:36.622: INFO: Waiting up to 5m0s for pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12" in namespace "emptydir-4771" to be "Succeeded or Failed" Oct 15 00:32:36.656: INFO: Pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.367622ms Oct 15 00:32:38.660: INFO: Pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038098489s Oct 15 00:32:40.664: INFO: Pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12": Phase="Running", Reason="", readiness=true. Elapsed: 4.042388993s Oct 15 00:32:42.669: INFO: Pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047483206s STEP: Saw pod success Oct 15 00:32:42.669: INFO: Pod "pod-22af0fdc-40cb-43a5-901a-742c704a0d12" satisfied condition "Succeeded or Failed" Oct 15 00:32:42.673: INFO: Trying to get logs from node leguer-worker pod pod-22af0fdc-40cb-43a5-901a-742c704a0d12 container test-container: STEP: delete the pod Oct 15 00:32:42.722: INFO: Waiting for pod pod-22af0fdc-40cb-43a5-901a-742c704a0d12 to disappear Oct 15 00:32:42.732: INFO: Pod pod-22af0fdc-40cb-43a5-901a-742c704a0d12 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:32:42.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4771" for this suite. 
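The emptydir test above writes a file at mode 0666 onto a tmpfs-backed emptyDir volume and has the container echo its permissions back. As a small sketch of what that mode renders as in ls-style notation (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"io/fs"
)

// lsMode renders an octal permission mode in ls-style notation,
// as the test container would report it.
func lsMode(mode uint32) string {
	return fs.FileMode(mode).String()
}

func main() {
	fmt.Println(lsMode(0666)) // -rw-rw-rw-
}
```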
• [SLOW TEST:6.214 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":287,"skipped":4594,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:32:42.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-3762 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 15 00:32:42.820: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 15 00:32:43.022: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 15 
00:32:45.140: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 15 00:32:47.026: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:49.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:51.028: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:53.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:55.026: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:57.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:32:59.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:33:01.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:33:03.027: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 15 00:33:05.027: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 15 00:33:05.033: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 15 00:33:09.067: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.139:8080/dial?request=hostname&protocol=http&host=10.244.2.70&port=8080&tries=1'] Namespace:pod-network-test-3762 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:33:09.067: INFO: >>> kubeConfig: /root/.kube/config I1015 00:33:09.100369 7 log.go:181] (0xc0001493f0) (0xc0009712c0) Create stream I1015 00:33:09.100412 7 log.go:181] (0xc0001493f0) (0xc0009712c0) Stream added, broadcasting: 1 I1015 00:33:09.103086 7 log.go:181] (0xc0001493f0) Reply frame received for 1 I1015 00:33:09.103137 7 log.go:181] (0xc0001493f0) (0xc000971400) Create stream I1015 00:33:09.103161 7 log.go:181] (0xc0001493f0) (0xc000971400) Stream added, broadcasting: 3 I1015 00:33:09.104064 7 log.go:181] (0xc0001493f0) Reply frame received for 3 I1015 00:33:09.104103 7 log.go:181] 
(0xc0001493f0) (0xc000971540) Create stream I1015 00:33:09.104119 7 log.go:181] (0xc0001493f0) (0xc000971540) Stream added, broadcasting: 5 I1015 00:33:09.105081 7 log.go:181] (0xc0001493f0) Reply frame received for 5 I1015 00:33:09.213331 7 log.go:181] (0xc0001493f0) Data frame received for 3 I1015 00:33:09.213380 7 log.go:181] (0xc000971400) (3) Data frame handling I1015 00:33:09.213439 7 log.go:181] (0xc000971400) (3) Data frame sent I1015 00:33:09.213922 7 log.go:181] (0xc0001493f0) Data frame received for 3 I1015 00:33:09.213950 7 log.go:181] (0xc000971400) (3) Data frame handling I1015 00:33:09.214015 7 log.go:181] (0xc0001493f0) Data frame received for 5 I1015 00:33:09.214052 7 log.go:181] (0xc000971540) (5) Data frame handling I1015 00:33:09.216241 7 log.go:181] (0xc0001493f0) Data frame received for 1 I1015 00:33:09.216274 7 log.go:181] (0xc0009712c0) (1) Data frame handling I1015 00:33:09.216304 7 log.go:181] (0xc0009712c0) (1) Data frame sent I1015 00:33:09.216332 7 log.go:181] (0xc0001493f0) (0xc0009712c0) Stream removed, broadcasting: 1 I1015 00:33:09.216417 7 log.go:181] (0xc0001493f0) Go away received I1015 00:33:09.216490 7 log.go:181] (0xc0001493f0) (0xc0009712c0) Stream removed, broadcasting: 1 I1015 00:33:09.216527 7 log.go:181] (0xc0001493f0) (0xc000971400) Stream removed, broadcasting: 3 I1015 00:33:09.216562 7 log.go:181] (0xc0001493f0) (0xc000971540) Stream removed, broadcasting: 5 Oct 15 00:33:09.216: INFO: Waiting for responses: map[] Oct 15 00:33:09.221: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.139:8080/dial?request=hostname&protocol=http&host=10.244.1.138&port=8080&tries=1'] Namespace:pod-network-test-3762 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 15 00:33:09.221: INFO: >>> kubeConfig: /root/.kube/config I1015 00:33:09.249584 7 log.go:181] (0xc0039469a0) (0xc00421e320) Create stream I1015 00:33:09.249620 7 log.go:181] 
(0xc0039469a0) (0xc00421e320) Stream added, broadcasting: 1 I1015 00:33:09.251157 7 log.go:181] (0xc0039469a0) Reply frame received for 1 I1015 00:33:09.251182 7 log.go:181] (0xc0039469a0) (0xc0012525a0) Create stream I1015 00:33:09.251189 7 log.go:181] (0xc0039469a0) (0xc0012525a0) Stream added, broadcasting: 3 I1015 00:33:09.251866 7 log.go:181] (0xc0039469a0) Reply frame received for 3 I1015 00:33:09.251893 7 log.go:181] (0xc0039469a0) (0xc004840000) Create stream I1015 00:33:09.251902 7 log.go:181] (0xc0039469a0) (0xc004840000) Stream added, broadcasting: 5 I1015 00:33:09.252672 7 log.go:181] (0xc0039469a0) Reply frame received for 5 I1015 00:33:09.334621 7 log.go:181] (0xc0039469a0) Data frame received for 3 I1015 00:33:09.334679 7 log.go:181] (0xc0012525a0) (3) Data frame handling I1015 00:33:09.334717 7 log.go:181] (0xc0012525a0) (3) Data frame sent I1015 00:33:09.335303 7 log.go:181] (0xc0039469a0) Data frame received for 3 I1015 00:33:09.335316 7 log.go:181] (0xc0012525a0) (3) Data frame handling I1015 00:33:09.335333 7 log.go:181] (0xc0039469a0) Data frame received for 5 I1015 00:33:09.335350 7 log.go:181] (0xc004840000) (5) Data frame handling I1015 00:33:09.336995 7 log.go:181] (0xc0039469a0) Data frame received for 1 I1015 00:33:09.337012 7 log.go:181] (0xc00421e320) (1) Data frame handling I1015 00:33:09.337033 7 log.go:181] (0xc00421e320) (1) Data frame sent I1015 00:33:09.337240 7 log.go:181] (0xc0039469a0) (0xc00421e320) Stream removed, broadcasting: 1 I1015 00:33:09.337288 7 log.go:181] (0xc0039469a0) Go away received I1015 00:33:09.337345 7 log.go:181] (0xc0039469a0) (0xc00421e320) Stream removed, broadcasting: 1 I1015 00:33:09.337384 7 log.go:181] (0xc0039469a0) (0xc0012525a0) Stream removed, broadcasting: 3 I1015 00:33:09.337403 7 log.go:181] (0xc0039469a0) (0xc004840000) Stream removed, broadcasting: 5 Oct 15 00:33:09.337: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:33:09.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3762" for this suite. • [SLOW TEST:26.606 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":288,"skipped":4605,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:33:09.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1015 00:33:49.489760 7 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 15 00:34:51.509: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 15 00:34:51.509: INFO: Deleting pod "simpletest.rc-4r98f" in namespace "gc-6372" Oct 15 00:34:51.527: INFO: Deleting pod "simpletest.rc-5bb2r" in namespace "gc-6372" Oct 15 00:34:51.589: INFO: Deleting pod "simpletest.rc-65fzp" in namespace "gc-6372" Oct 15 00:34:51.656: INFO: Deleting pod "simpletest.rc-8rvsw" in namespace "gc-6372" Oct 15 00:34:52.210: INFO: Deleting pod "simpletest.rc-f5ccz" in namespace "gc-6372" Oct 15 00:34:52.263: INFO: Deleting pod "simpletest.rc-g7lx6" in namespace "gc-6372" Oct 15 00:34:52.624: INFO: Deleting pod "simpletest.rc-qwpf4" in namespace "gc-6372" Oct 15 00:34:52.792: INFO: Deleting pod "simpletest.rc-scsrk" in namespace "gc-6372" Oct 15 00:34:53.073: INFO: Deleting pod "simpletest.rc-xkdlv" in namespace "gc-6372" Oct 15 00:34:53.250: INFO: Deleting pod "simpletest.rc-zmb9q" in namespace "gc-6372" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:34:53.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6372" for this suite. 
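The garbage-collector test above deletes a ReplicationController with delete options that orphan its dependents, then waits 30 seconds to confirm the collector does not remove the pods. A minimal sketch of the request body behind that "delete options say so" step (the resource names here are illustrative, but `DeleteOptions` and `propagationPolicy` are the real Kubernetes API fields):

```python
import json

# Sketch: the DeleteOptions body that asks the API server to orphan
# dependents rather than cascade the delete, as exercised by the
# garbage-collector conformance test above. Sent with a DELETE to
# /api/v1/namespaces/<ns>/replicationcontrollers/<name>, it leaves the
# RC's pods running with their ownerReferences cleared.
delete_options = {
    "kind": "DeleteOptions",
    "apiVersion": "v1",
    # Other accepted values are "Background" and "Foreground",
    # both of which DO cascade the delete to dependent pods.
    "propagationPolicy": "Orphan",
}

body = json.dumps(delete_options)
print(body)
```

With kubectl the equivalent is roughly `kubectl delete rc <name> --cascade=orphan` (older releases used `--cascade=false`); either way the pods named in the log (`simpletest.rc-*`) survive the RC's deletion and must be cleaned up individually, which is exactly what the test's teardown does.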
• [SLOW TEST:104.477 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":289,"skipped":4612,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:34:53.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:35:00.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9076" for this suite. • [SLOW TEST:6.505 seconds] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when scheduling a busybox Pod with hostAliases /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4650,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:35:00.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5313;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5313;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5313.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5313.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.102.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.102.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.102.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.102.193_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5313;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5313;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5313.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5313.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5313.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5313.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc;check="$$(dig +notcp +noall +answer +search 
_http._tcp.test-service-2.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5313.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5313.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5313.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5313.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 193.102.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.102.193_udp@PTR;check="$$(dig +tcp +noall +answer +search 193.102.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.102.193_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 15 00:35:06.628: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.631: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.635: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.638: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: 
the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.642: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.645: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.649: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.651: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.668: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.671: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.673: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.676: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod 
dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.678: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.681: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.683: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.686: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:06.703: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc] Oct 15 00:35:11.707: INFO: Unable to read 
wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.711: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.724: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.727: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.729: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.747: 
INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.750: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.753: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.755: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.758: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.761: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.764: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:11.767: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) 
Oct 15 00:35:11.785: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc] Oct 15 00:35:16.708: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.711: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.723: 
INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.728: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.746: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.749: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.751: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.754: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.757: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) 
Oct 15 00:35:16.760: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.763: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.766: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:16.786: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc] Oct 15 00:35:21.708: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.712: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods 
dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.715: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.718: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.726: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.728: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.746: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c) Oct 15 00:35:21.749: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the 
requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.751: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.754: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.756: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.759: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.761: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.764: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:21.779: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc]
Oct 15 00:35:26.707: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.711: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.720: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.722: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.725: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.727: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.748: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.750: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.754: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.757: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.760: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.767: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.769: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:26.786: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc]
Oct 15 00:35:31.708: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.712: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.723: INFO: Unable to read wheezy_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.726: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.730: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.733: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.757: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.759: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.761: INFO: Unable to read jessie_udp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313 from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.767: INFO: Unable to read jessie_udp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.770: INFO: Unable to read jessie_tcp@dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.773: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.775: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc from pod dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c: the server could not find the requested resource (get pods dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c)
Oct 15 00:35:31.793: INFO: Lookups using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5313 wheezy_tcp@dns-test-service.dns-5313 wheezy_udp@dns-test-service.dns-5313.svc wheezy_tcp@dns-test-service.dns-5313.svc wheezy_udp@_http._tcp.dns-test-service.dns-5313.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5313.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5313 jessie_tcp@dns-test-service.dns-5313 jessie_udp@dns-test-service.dns-5313.svc jessie_tcp@dns-test-service.dns-5313.svc jessie_udp@_http._tcp.dns-test-service.dns-5313.svc jessie_tcp@_http._tcp.dns-test-service.dns-5313.svc]
Oct 15 00:35:36.817: INFO: DNS probes using dns-5313/dns-test-2250b345-e15c-4ce7-9d15-4919c7c2b91c succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:35:37.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5313" for this suite.
• [SLOW TEST:37.085 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":291,"skipped":4676,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:35:37.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-6472
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 15 00:35:37.590: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 15 00:35:37.755: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 15 00:35:39.759: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 15 00:35:41.759: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 15 00:35:43.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:45.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:47.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:49.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:51.759: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:53.760: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 15 00:35:55.759: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 15 00:35:55.766: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 15 00:35:57.771: INFO: The status of Pod netserver-1 is Running (Ready = false)
Oct 15 00:35:59.770: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 15 00:36:05.925: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.78 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 15 00:36:05.925: INFO: >>> kubeConfig: /root/.kube/config
I1015 00:36:05.957798 7 log.go:181] (0xc000149810) (0xc001ae0e60) Create stream
I1015 00:36:05.957840 7 log.go:181] (0xc000149810) (0xc001ae0e60) Stream added, broadcasting: 1
I1015 00:36:05.962538 7 log.go:181] (0xc000149810) Reply frame received for 1
I1015 00:36:05.962587 7 log.go:181] (0xc000149810) (0xc001e84000) Create stream
I1015 00:36:05.962603 7 log.go:181] (0xc000149810) (0xc001e84000) Stream added, broadcasting: 3
I1015 00:36:05.963864 7 log.go:181] (0xc000149810) Reply frame received for 3
I1015 00:36:05.963900 7 log.go:181] (0xc000149810) (0xc00145e960) Create stream
I1015 00:36:05.963911 7 log.go:181] (0xc000149810) (0xc00145e960) Stream added, broadcasting: 5
I1015 00:36:05.965095 7 log.go:181] (0xc000149810) Reply frame received for 5
I1015 00:36:07.060422 7 log.go:181] (0xc000149810) Data frame received for 5
I1015 00:36:07.060458 7 log.go:181] (0xc00145e960) (5) Data frame handling
I1015 00:36:07.060476 7 log.go:181] (0xc000149810) Data frame received for 3
I1015 00:36:07.060482 7 log.go:181] (0xc001e84000) (3) Data frame handling
I1015 00:36:07.060496 7 log.go:181] (0xc001e84000) (3) Data frame sent
I1015 00:36:07.060508 7 log.go:181] (0xc000149810) Data frame received for 3
I1015 00:36:07.060521 7 log.go:181] (0xc001e84000) (3) Data frame handling
I1015 00:36:07.062844 7 log.go:181] (0xc000149810) Data frame received for 1
I1015 00:36:07.062898 7 log.go:181] (0xc001ae0e60) (1) Data frame handling
I1015 00:36:07.063104 7 log.go:181] (0xc001ae0e60) (1) Data frame sent
I1015 00:36:07.063125 7 log.go:181] (0xc000149810) (0xc001ae0e60) Stream removed, broadcasting: 1
I1015 00:36:07.063159 7 log.go:181] (0xc000149810) Go away received
I1015 00:36:07.063224 7 log.go:181] (0xc000149810) (0xc001ae0e60) Stream removed, broadcasting: 1
I1015 00:36:07.063246 7 log.go:181] (0xc000149810) (0xc001e84000) Stream removed, broadcasting: 3
I1015 00:36:07.063261 7 log.go:181] (0xc000149810) (0xc00145e960) Stream removed, broadcasting: 5
Oct 15 00:36:07.063: INFO: Found all expected endpoints: [netserver-0]
Oct 15 00:36:07.067: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.145 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6472 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 15 00:36:07.067: INFO: >>> kubeConfig: /root/.kube/config
I1015 00:36:07.100826 7 log.go:181] (0xc003436840) (0xc001fdd680) Create stream
I1015 00:36:07.100955 7 log.go:181] (0xc003436840) (0xc001fdd680) Stream added, broadcasting: 1
I1015 00:36:07.103184 7 log.go:181] (0xc003436840) Reply frame received for 1
I1015 00:36:07.103276 7 log.go:181] (0xc003436840) (0xc00420d900) Create stream
I1015 00:36:07.103296 7 log.go:181] (0xc003436840) (0xc00420d900) Stream added, broadcasting: 3
I1015 00:36:07.104210 7 log.go:181] (0xc003436840) Reply frame received for 3
I1015 00:36:07.104258 7 log.go:181] (0xc003436840) (0xc001ae0f00) Create stream
I1015 00:36:07.104275 7 log.go:181] (0xc003436840) (0xc001ae0f00) Stream added, broadcasting: 5
I1015 00:36:07.105163 7 log.go:181] (0xc003436840) Reply frame received for 5
I1015 00:36:08.187487 7 log.go:181] (0xc003436840) Data frame received for 3
I1015 00:36:08.187597 7 log.go:181] (0xc00420d900) (3) Data frame handling
I1015 00:36:08.187633 7 log.go:181] (0xc00420d900) (3) Data frame sent
I1015 00:36:08.187827 7 log.go:181] (0xc003436840) Data frame received for 5
I1015 00:36:08.187850 7 log.go:181] (0xc001ae0f00) (5) Data frame handling
I1015 00:36:08.187872 7 log.go:181] (0xc003436840) Data frame received for 3
I1015 00:36:08.187887 7 log.go:181] (0xc00420d900) (3) Data frame handling
I1015 00:36:08.189878 7 log.go:181] (0xc003436840) Data frame received for 1
I1015 00:36:08.189906 7 log.go:181] (0xc001fdd680) (1) Data frame handling
I1015 00:36:08.189925 7 log.go:181] (0xc001fdd680) (1) Data frame sent
I1015 00:36:08.190082 7 log.go:181] (0xc003436840) (0xc001fdd680) Stream removed, broadcasting: 1
I1015 00:36:08.190138 7 log.go:181] (0xc003436840) Go away received
I1015 00:36:08.190353 7 log.go:181] (0xc003436840) (0xc001fdd680) Stream removed, broadcasting: 1
I1015 00:36:08.190452 7 log.go:181] (0xc003436840) (0xc00420d900) Stream removed, broadcasting: 3
I1015 00:36:08.190489 7 log.go:181] (0xc003436840) (0xc001ae0f00) Stream removed, broadcasting: 5
Oct 15 00:36:08.190: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:08.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6472" for this suite.
• [SLOW TEST:30.786 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":292,"skipped":4684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:08.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Oct 15 00:36:08.976: INFO: starting watch
STEP: patching
STEP: updating
Oct 15 00:36:08.988: INFO: waiting for watch events with expected annotations
Oct 15 00:36:08.988: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:09.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-4245" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":293,"skipped":4721,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:09.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-392532ac-28a6-44b9-a2e3-a691a0f413e1
STEP: Creating configMap with name cm-test-opt-upd-f4f66b3a-75cd-405a-8d40-62b630cc9e6b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-392532ac-28a6-44b9-a2e3-a691a0f413e1
STEP: Updating configmap cm-test-opt-upd-f4f66b3a-75cd-405a-8d40-62b630cc9e6b
STEP: Creating configMap with name cm-test-opt-create-60186ab9-42c6-498c-bac6-ddc0718f02be
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:19.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6771" for this suite.
• [SLOW TEST:10.242 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":294,"skipped":4735,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:19.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 15 00:36:19.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Oct 15 00:36:21.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2666 create -f -'
Oct 15 00:36:29.479: INFO: stderr: ""
Oct 15 00:36:29.479: INFO: stdout: "e2e-test-crd-publish-openapi-4677-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Oct 15 00:36:29.479: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2666 delete e2e-test-crd-publish-openapi-4677-crds test-cr'
Oct 15 00:36:29.612: INFO: stderr: ""
Oct 15 00:36:29.612: INFO: stdout: "e2e-test-crd-publish-openapi-4677-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Oct 15 00:36:29.612: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2666 apply -f -'
Oct 15 00:36:29.944: INFO: stderr: ""
Oct 15 00:36:29.944: INFO: stdout: "e2e-test-crd-publish-openapi-4677-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Oct 15 00:36:29.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2666 delete e2e-test-crd-publish-openapi-4677-crds test-cr'
Oct 15 00:36:30.061: INFO: stderr: ""
Oct 15 00:36:30.061: INFO: stdout: "e2e-test-crd-publish-openapi-4677-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Oct 15 00:36:30.061: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4677-crds'
Oct 15 00:36:30.351: INFO: stderr: ""
Oct 15 00:36:30.351: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4677-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:33.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2666" for this suite.
• [SLOW TEST:13.928 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":295,"skipped":4740,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:33.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[BeforeEach] Kubectl run pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
[It] should create a pod from an image when restart is Never [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Oct 15 00:36:33.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1208'
Oct 15 00:36:33.529: INFO: stderr: ""
Oct 15 00:36:33.529: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550
Oct 15 00:36:33.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1208'
Oct 15 00:36:39.486: INFO: stderr: ""
Oct 15 00:36:39.486: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:39.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1208" for this suite.
• [SLOW TEST:6.148 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should create a pod from an image when restart is Never [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":296,"skipped":4768,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Events should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:39.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Create set of events
Oct 15 00:36:39.618: INFO: created test-event-1
Oct 15 00:36:39.624: INFO: created test-event-2
Oct 15 00:36:39.630: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Oct 15 00:36:39.636: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Oct 15 00:36:39.673: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:39.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4120" for this suite.
•{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":297,"skipped":4783,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 15 00:36:39.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 15 00:36:39.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 15 00:36:39.772: INFO: Waiting for terminating namespaces to be deleted...
Oct 15 00:36:39.791: INFO: Logging pods the apiserver thinks is on node leguer-worker before test
Oct 15 00:36:39.796: INFO: kindnet-lc95n from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded)
Oct 15 00:36:39.796: INFO: Container kindnet-cni ready: true, restart count 0
Oct 15 00:36:39.796: INFO: kube-proxy-bmzvg from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded)
Oct 15 00:36:39.796: INFO: Container kube-proxy ready: true, restart count 0
Oct 15 00:36:39.796: INFO: Logging pods the apiserver thinks is on node leguer-worker2 before test
Oct 15 00:36:39.800: INFO: kindnet-nffr7 from kube-system started at 2020-10-04 09:51:31 +0000 UTC (1 container statuses recorded)
Oct 15 00:36:39.800: INFO: Container kindnet-cni ready: true, restart count 0
Oct 15 00:36:39.800: INFO: kube-proxy-sxhc5 from kube-system started at 2020-10-04 09:51:30 +0000 UTC (1 container statuses recorded)
Oct 15 00:36:39.800: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-597278c6-8bec-47f9-ad20-3c140e7e2d42 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-597278c6-8bec-47f9-ad20-3c140e7e2d42 off the node leguer-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-597278c6-8bec-47f9-ad20-3c140e7e2d42
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 15 00:36:48.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-19" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:8.425 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":298,"skipped":4802,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a
kubernetes client Oct 15 00:36:48.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod Oct 15 00:38:48.729: INFO: Successfully updated pod "var-expansion-8831a472-379f-4707-a04b-61475b0c49d1" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 15 00:38:52.752: INFO: Deleting pod "var-expansion-8831a472-379f-4707-a04b-61475b0c49d1" in namespace "var-expansion-5465" Oct 15 00:38:52.758: INFO: Wait up to 5m0s for pod "var-expansion-8831a472-379f-4707-a04b-61475b0c49d1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:39:26.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5465" for this suite. 
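The subpath-expansion spec above never prints the pod manifest it drives. For readers unfamiliar with the feature, a minimal sketch of a volumeMount whose `subPathExpr` expands an environment variable (all names here are illustrative, not the test's actual ones):

```yaml
# Illustrative sketch only: the e2e test's real manifest is not shown in this log.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo       # hypothetical name
spec:
  containers:
  - name: main
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # image family used elsewhere in this run
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)   # expands to an env var declared in this container
  volumes:
  - name: workdir
    emptyDir: {}
```

`subPathExpr` can only reference env vars declared in the same container; the log above shows the test holding the pod in a failed condition for roughly two minutes, then updating it and watching it reach Running before a graceful delete.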
• [SLOW TEST:158.679 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":299,"skipped":4820,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:39:26.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:39:37.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8847" for this suite. • [SLOW TEST:11.184 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":300,"skipped":4849,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:39:37.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating all guestbook components Oct 15 00:39:38.022: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Oct 15 00:39:38.022: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:38.506: INFO: stderr: "" Oct 15 00:39:38.506: INFO: stdout: "service/agnhost-replica created\n" Oct 15 00:39:38.506: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Oct 15 00:39:38.506: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:38.813: INFO: stderr: "" Oct 15 00:39:38.813: INFO: stdout: "service/agnhost-primary created\n" Oct 15 00:39:38.814: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Oct 15 00:39:38.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:39.161: INFO: stderr: "" Oct 15 00:39:39.161: INFO: stdout: "service/frontend created\n" Oct 15 00:39:39.161: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Oct 15 00:39:39.161: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:39.472: INFO: stderr: "" Oct 15 00:39:39.472: INFO: stdout: "deployment.apps/frontend created\n" Oct 15 00:39:39.473: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 15 00:39:39.473: 
INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:39.856: INFO: stderr: "" Oct 15 00:39:39.856: INFO: stdout: "deployment.apps/agnhost-primary created\n" Oct 15 00:39:39.856: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.20 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Oct 15 00:39:39.856: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3062' Oct 15 00:39:40.185: INFO: stderr: "" Oct 15 00:39:40.185: INFO: stdout: "deployment.apps/agnhost-replica created\n" STEP: validating guestbook app Oct 15 00:39:40.185: INFO: Waiting for all frontend pods to be Running. Oct 15 00:39:50.235: INFO: Waiting for frontend to serve content. Oct 15 00:39:50.247: INFO: Trying to add a new entry to the guestbook. Oct 15 00:39:50.255: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Oct 15 00:39:50.263: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:50.422: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:50.422: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources Oct 15 00:39:50.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:50.578: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:50.578: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 15 00:39:50.579: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:50.722: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:50.722: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 15 00:39:50.722: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:50.822: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:50.822: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Oct 15 00:39:50.822: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:50.930: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:50.930: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources Oct 15 00:39:50.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:43573 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3062' Oct 15 00:39:51.037: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Oct 15 00:39:51.037: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:39:51.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3062" for this suite. 
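The guestbook manifests streamed earlier in this spec are flattened onto single lines by the log writer. Reconstructed with conventional indentation (field values verbatim from the log), the frontend pair reads:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: ["guestbook", "--backend-port", "6379"]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
```

Each piece was applied with `kubectl create -f - --namespace=kubectl-3062` and torn down with `kubectl delete --grace-period=0 --force`, which is what produces the repeated immediate-deletion warnings above.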
• [SLOW TEST:13.069 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351 should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":301,"skipped":4859,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:39:51.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach 
Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 15 00:39:58.786: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:39:58.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3063" for this suite. • [SLOW TEST:7.815 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":302,"skipped":4870,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 15 00:39:58.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 15 00:39:58.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-368" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":303,"completed":303,"skipped":4921,"failed":0} SSSSSSSSOct 15 00:39:58.992: INFO: Running AfterSuite actions on all nodes Oct 15 00:39:58.992: INFO: Running AfterSuite actions on node 1 Oct 15 00:39:58.992: INFO: Skipping dumping logs from cluster JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml {"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0} Ran 303 of 5232 Specs in 6182.369 seconds SUCCESS! 
-- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped PASS
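For reference, the NodeSelector predicate validated in spec 298 above boils down to relaunching a pod whose nodeSelector matches the random label the test stamped on leguer-worker. A minimal sketch (the pod name and image are illustrative; the label key and value are the ones recorded in the log):

```yaml
# Illustrative sketch: the test's own pod manifest is not printed in this log.
apiVersion: v1
kind: Pod
metadata:
  name: with-labels              # hypothetical name
spec:
  containers:
  - name: with-labels
    image: k8s.gcr.io/e2e-test-images/agnhost:2.20   # image family used elsewhere in this run
  nodeSelector:
    kubernetes.io/e2e-597278c6-8bec-47f9-ad20-3c140e7e2d42: "42"
```

The scheduler will bind such a pod only to a node carrying that exact label, which is why the test applies the label first, relaunches the pod, and removes the label during cleanup.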