I1005 09:40:23.934813 10 test_context.go:429] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I1005 09:40:23.941286 10 e2e.go:129] Starting e2e run "c3f5ad12-076c-4084-9d39-b6e5f4f3a3a2" on Ginkgo node 1
{"msg":"Test Suite starting","total":303,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1601890807 - Will randomize all specs
Will run 303 of 5232 specs

Oct 5 09:40:24.539: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 09:40:24.587: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Oct 5 09:40:24.782: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Oct 5 09:40:24.978: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Oct 5 09:40:24.978: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Oct 5 09:40:24.979: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Oct 5 09:40:25.020: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Oct 5 09:40:25.020: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Oct 5 09:40:25.020: INFO: e2e test version: v1.19.2
Oct 5 09:40:25.024: INFO: kube-apiserver version: v1.19.0
Oct 5 09:40:25.025: INFO: >>> kubeConfig: /root/.kube/config
Oct 5 09:40:25.047: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:40:25.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
Oct 5 09:40:25.158: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 09:40:30.302: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 09:40:32.570: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 09:40:34.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737487630, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 09:40:37.622: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Oct 5 09:40:37.694: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:40:37.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2022" for this suite.
STEP: Destroying namespace "webhook-2022-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:12.819 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should deny crd creation [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":303,"completed":1,"skipped":7,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:40:37.873: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap configmap-1510/configmap-test-10fa8307-f812-489a-a7f2-ac9d97f8e1a1
STEP: Creating a pod to test consume configMaps
Oct 5 09:40:38.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc" in namespace "configmap-1510" to be "Succeeded or Failed"
Oct 5 09:40:38.185: INFO: Pod "pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc": Phase="Pending", Reason="", readiness=false. Elapsed: 115.538493ms
Oct 5 09:40:40.202: INFO: Pod "pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131918116s
Oct 5 09:40:42.214: INFO: Pod "pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.14391248s
STEP: Saw pod success
Oct 5 09:40:42.214: INFO: Pod "pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc" satisfied condition "Succeeded or Failed"
Oct 5 09:40:42.219: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc container env-test:
STEP: delete the pod
Oct 5 09:40:42.261: INFO: Waiting for pod pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc to disappear
Oct 5 09:40:42.266: INFO: Pod pod-configmaps-c1a6adbb-1d84-4cfb-a83e-f0fe84b092cc no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:40:42.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1510" for this suite.
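(For reference, the "consumable via environment variable" pattern this spec exercises corresponds roughly to the manifest below. It is a sketch, not the spec's exact fixture: the names, key, value, and image are illustrative stand-ins, not values from this run.)

```yaml
# Sketch of a ConfigMap consumed as an environment variable (illustrative names).
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test        # the e2e spec generates a UUID-suffixed name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox            # any image that can run `env`
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:      # injects one key of the ConfigMap into the environment
          name: configmap-test
          key: data-1
```

The pod runs to completion and its log contains `CONFIG_DATA_1=value-1`, which is what the framework's "Succeeded or Failed" wait plus log check verifies.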
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":303,"completed":2,"skipped":19,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:40:42.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Oct 5 09:40:42.414: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147484 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:40:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:40:42.421: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147484 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:40:42 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Oct 5 09:40:52.438: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147545 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:40:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:40:52.440: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147545 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:40:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Oct 5 09:41:02.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147579 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:41:02.458: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147579 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Oct 5 09:41:12.505: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147650 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:41:12.505: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-a a95f385e-648f-4c98-a3f9-2343f50d66cf 3147650 0 2020-10-05 09:40:42 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:02 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Oct 5 09:41:22.535: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-b 4a63d0f9-6a74-4d06-9a23-af8687ae4f4a 3147711 0 2020-10-05 09:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:41:22.536: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-b 4a63d0f9-6a74-4d06-9a23-af8687ae4f4a 3147711 0 2020-10-05 09:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Oct 5 09:41:32.548: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-b 4a63d0f9-6a74-4d06-9a23-af8687ae4f4a 3147753 0 2020-10-05 09:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:41:32.549: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8094 /api/v1/namespaces/watch-8094/configmaps/e2e-watch-test-configmap-b 4a63d0f9-6a74-4d06-9a23-af8687ae4f4a 3147753 0 2020-10-05 09:41:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-10-05 09:41:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:41:42.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8094" for this suite.
• [SLOW TEST:60.286 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":303,"completed":3,"skipped":38,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:41:42.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation
schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 09:41:42.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Oct 5 09:41:53.354: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5818 create -f -'
Oct 5 09:41:58.789: INFO: stderr: ""
Oct 5 09:41:58.789: INFO: stdout: "e2e-test-crd-publish-openapi-3024-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Oct 5 09:41:58.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5818 delete e2e-test-crd-publish-openapi-3024-crds test-cr'
Oct 5 09:42:00.054: INFO: stderr: ""
Oct 5 09:42:00.054: INFO: stdout: "e2e-test-crd-publish-openapi-3024-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Oct 5 09:42:00.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5818 apply -f -'
Oct 5 09:42:02.734: INFO: stderr: ""
Oct 5 09:42:02.734: INFO: stdout: "e2e-test-crd-publish-openapi-3024-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Oct 5 09:42:02.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5818 delete e2e-test-crd-publish-openapi-3024-crds test-cr'
Oct 5 09:42:03.968: INFO: stderr: ""
Oct 5 09:42:03.968: INFO: stdout: "e2e-test-crd-publish-openapi-3024-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Oct 5 09:42:03.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3024-crds'
Oct 5 09:42:06.252: INFO: stderr: ""
Oct 5 09:42:06.252: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3024-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:42:16.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5818" for this suite.
• [SLOW TEST:34.351 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":303,"completed":4,"skipped":42,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:42:16.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename
subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-configmap-9n72
STEP: Creating a pod to test atomic-volume-subpath
Oct 5 09:42:17.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9n72" in namespace "subpath-3667" to be "Succeeded or Failed"
Oct 5 09:42:17.069: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Pending", Reason="", readiness=false. Elapsed: 46.89321ms
Oct 5 09:42:19.109: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086500099s
Oct 5 09:42:21.119: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 4.097113876s
Oct 5 09:42:23.131: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 6.108492751s
Oct 5 09:42:25.167: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 8.145161123s
Oct 5 09:42:27.175: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 10.152688675s
Oct 5 09:42:29.182: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 12.159543425s
Oct 5 09:42:31.191: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 14.169129554s
Oct 5 09:42:33.222: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 16.199415285s
Oct 5 09:42:35.229: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 18.206806059s
Oct 5 09:42:37.237: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 20.214371776s
Oct 5 09:42:39.244: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Running", Reason="", readiness=true. Elapsed: 22.221200387s
Oct 5 09:42:41.287: INFO: Pod "pod-subpath-test-configmap-9n72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.264749399s
STEP: Saw pod success
Oct 5 09:42:41.287: INFO: Pod "pod-subpath-test-configmap-9n72" satisfied condition "Succeeded or Failed"
Oct 5 09:42:41.305: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-9n72 container test-container-subpath-configmap-9n72:
STEP: delete the pod
Oct 5 09:42:41.439: INFO: Waiting for pod pod-subpath-test-configmap-9n72 to disappear
Oct 5 09:42:41.447: INFO: Pod pod-subpath-test-configmap-9n72 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9n72
Oct 5 09:42:41.447: INFO: Deleting pod "pod-subpath-test-configmap-9n72" in namespace "subpath-3667"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:42:41.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3667" for this suite.
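(The subPath mount that this atomic-writer spec verifies corresponds to a pod spec along these lines; it is a sketch with illustrative names and image, not the spec's exact fixture.)

```yaml
# Sketch of mounting a single ConfigMap key via subPath (illustrative names).
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox
    command: ["sh", "-c", "cat /test-volume/test-file"]
    volumeMounts:
    - name: config-vol
      mountPath: /test-volume/test-file
      subPath: test-file      # mount one key as a file, not the whole volume directory
  volumes:
  - name: config-vol
    configMap:
      name: subpath-configmap # hypothetical ConfigMap containing the test-file key
```

The point of the e2e spec is that updates to the ConfigMap reach the subPath-mounted file atomically while the container repeatedly reads it, which is why the log above polls the pod through a long Running phase before it reaches Succeeded.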
• [SLOW TEST:24.542 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":303,"completed":5,"skipped":67,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:42:41.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 09:42:41.773: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Oct 5 09:42:43.884: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:42:43.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7435" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":303,"completed":6,"skipped":118,"failed":0}
SSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:42:43.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Oct 5 09:42:51.511: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:42:51.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1452" for this suite.

• [SLOW TEST:7.955 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":303,"completed":7,"skipped":123,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:42:51.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 09:42:52.180: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641" in namespace "downward-api-8687" to be "Succeeded or Failed"
Oct 5 09:42:52.210: INFO: Pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641": Phase="Pending", Reason="", readiness=false. Elapsed: 30.450359ms
Oct 5 09:42:54.319: INFO: Pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139435073s
Oct 5 09:42:56.335: INFO: Pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641": Phase="Running", Reason="", readiness=true. Elapsed: 4.154516606s
Oct 5 09:42:58.342: INFO: Pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.161896423s
STEP: Saw pod success
Oct 5 09:42:58.342: INFO: Pod "downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641" satisfied condition "Succeeded or Failed"
Oct 5 09:42:58.347: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641 container client-container:
STEP: delete the pod
Oct 5 09:42:58.395: INFO: Waiting for pod downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641 to disappear
Oct 5 09:42:58.462: INFO: Pod downwardapi-volume-a2388e96-2eca-4e9d-a369-dbcef9a87641 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:42:58.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8687" for this suite.

• [SLOW TEST:6.863 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":8,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:42:58.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Oct 5 09:43:09.231: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 5 09:43:09.283: INFO: Pod pod-with-poststart-http-hook still exists
Oct 5 09:43:11.284: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 5 09:43:11.293: INFO: Pod pod-with-poststart-http-hook still exists
Oct 5 09:43:13.284: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Oct 5 09:43:13.292: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:13.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-404" for this suite.
• [SLOW TEST:14.490 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  when create a pod with lifecycle hook
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":303,"completed":9,"skipped":168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:13.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Oct 5 09:43:13.430: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8934 /api/v1/namespaces/watch-8934/configmaps/e2e-watch-test-watch-closed 88c09f3d-1dba-43dc-bbf3-5eed7adf82ab 3148488 0 2020-10-05 09:43:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 09:43:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:43:13.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8934 /api/v1/namespaces/watch-8934/configmaps/e2e-watch-test-watch-closed 88c09f3d-1dba-43dc-bbf3-5eed7adf82ab 3148491 0 2020-10-05 09:43:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 09:43:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Oct 5 09:43:13.466: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8934 /api/v1/namespaces/watch-8934/configmaps/e2e-watch-test-watch-closed 88c09f3d-1dba-43dc-bbf3-5eed7adf82ab 3148492 0 2020-10-05 09:43:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 09:43:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Oct 5 09:43:13.469: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8934 /api/v1/namespaces/watch-8934/configmaps/e2e-watch-test-watch-closed 88c09f3d-1dba-43dc-bbf3-5eed7adf82ab 3148493 0 2020-10-05 09:43:13 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-10-05 09:43:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:13.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8934" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":303,"completed":10,"skipped":201,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:13.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating pod pod-subpath-test-secret-j6ds
STEP: Creating a pod to test atomic-volume-subpath
Oct 5 09:43:13.622: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-j6ds" in namespace "subpath-2349" to be "Succeeded or Failed"
Oct 5 09:43:13.627: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Pending", Reason="", readiness=false. Elapsed: 4.982215ms
Oct 5 09:43:15.821: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Pending", Reason="", readiness=false. Elapsed: 2.198685589s
Oct 5 09:43:17.827: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 4.204919904s
Oct 5 09:43:19.943: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 6.321152413s
Oct 5 09:43:21.951: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 8.329204608s
Oct 5 09:43:23.958: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 10.33618496s
Oct 5 09:43:25.966: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 12.343400865s
Oct 5 09:43:27.974: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 14.35189384s
Oct 5 09:43:30.013: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 16.391070292s
Oct 5 09:43:32.021: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 18.399139878s
Oct 5 09:43:34.028: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 20.405669541s
Oct 5 09:43:36.037: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Running", Reason="", readiness=true. Elapsed: 22.414708398s
Oct 5 09:43:38.049: INFO: Pod "pod-subpath-test-secret-j6ds": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.426662502s
STEP: Saw pod success
Oct 5 09:43:38.049: INFO: Pod "pod-subpath-test-secret-j6ds" satisfied condition "Succeeded or Failed"
Oct 5 09:43:38.054: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-j6ds container test-container-subpath-secret-j6ds:
STEP: delete the pod
Oct 5 09:43:38.141: INFO: Waiting for pod pod-subpath-test-secret-j6ds to disappear
Oct 5 09:43:38.186: INFO: Pod pod-subpath-test-secret-j6ds no longer exists
STEP: Deleting pod pod-subpath-test-secret-j6ds
Oct 5 09:43:38.186: INFO: Deleting pod "pod-subpath-test-secret-j6ds" in namespace "subpath-2349"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2349" for this suite.
• [SLOW TEST:24.709 seconds]
[sig-storage] Subpath
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":303,"completed":11,"skipped":223,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:38.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir volume type on node default medium
Oct 5 09:43:38.361: INFO: Waiting up to 5m0s for pod "pod-946db501-128c-4db6-8d08-2214faf8793b" in namespace "emptydir-1093" to be "Succeeded or Failed"
Oct 5 09:43:38.371: INFO: Pod "pod-946db501-128c-4db6-8d08-2214faf8793b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.647469ms
Oct 5 09:43:40.426: INFO: Pod "pod-946db501-128c-4db6-8d08-2214faf8793b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064981922s
Oct 5 09:43:42.444: INFO: Pod "pod-946db501-128c-4db6-8d08-2214faf8793b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082561622s
STEP: Saw pod success
Oct 5 09:43:42.444: INFO: Pod "pod-946db501-128c-4db6-8d08-2214faf8793b" satisfied condition "Succeeded or Failed"
Oct 5 09:43:42.449: INFO: Trying to get logs from node kali-worker pod pod-946db501-128c-4db6-8d08-2214faf8793b container test-container:
STEP: delete the pod
Oct 5 09:43:42.495: INFO: Waiting for pod pod-946db501-128c-4db6-8d08-2214faf8793b to disappear
Oct 5 09:43:42.527: INFO: Pod pod-946db501-128c-4db6-8d08-2214faf8793b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:42.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1093" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":12,"skipped":230,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:42.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward api env vars
Oct 5 09:43:42.660: INFO: Waiting up to 5m0s for pod "downward-api-2be5f625-0ddf-443a-837a-f60330150068" in namespace "downward-api-3389" to be "Succeeded or Failed"
Oct 5 09:43:42.738: INFO: Pod "downward-api-2be5f625-0ddf-443a-837a-f60330150068": Phase="Pending", Reason="", readiness=false. Elapsed: 78.611925ms
Oct 5 09:43:44.834: INFO: Pod "downward-api-2be5f625-0ddf-443a-837a-f60330150068": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174197226s
Oct 5 09:43:46.841: INFO: Pod "downward-api-2be5f625-0ddf-443a-837a-f60330150068": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.181101633s
STEP: Saw pod success
Oct 5 09:43:46.841: INFO: Pod "downward-api-2be5f625-0ddf-443a-837a-f60330150068" satisfied condition "Succeeded or Failed"
Oct 5 09:43:46.846: INFO: Trying to get logs from node kali-worker2 pod downward-api-2be5f625-0ddf-443a-837a-f60330150068 container dapi-container:
STEP: delete the pod
Oct 5 09:43:46.884: INFO: Waiting for pod downward-api-2be5f625-0ddf-443a-837a-f60330150068 to disappear
Oct 5 09:43:46.934: INFO: Pod downward-api-2be5f625-0ddf-443a-837a-f60330150068 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:46.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3389" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":303,"completed":13,"skipped":265,"failed":0}
S
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:46.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-ff7150cc-0253-4632-b4b4-963c09d7e031
STEP: Creating a pod to test consume configMaps
Oct 5 09:43:47.393: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2" in namespace "configmap-2708" to be "Succeeded or Failed"
Oct 5 09:43:47.402: INFO: Pod "pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.898727ms
Oct 5 09:43:49.411: INFO: Pod "pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01749887s
Oct 5 09:43:51.418: INFO: Pod "pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024610328s
STEP: Saw pod success
Oct 5 09:43:51.418: INFO: Pod "pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2" satisfied condition "Succeeded or Failed"
Oct 5 09:43:51.423: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2 container configmap-volume-test:
STEP: delete the pod
Oct 5 09:43:51.459: INFO: Waiting for pod pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2 to disappear
Oct 5 09:43:51.473: INFO: Pod pod-configmaps-5b55c8cc-ede4-433e-8d0e-91610fca9da2 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:51.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2708" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":14,"skipped":266,"failed":0}
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:43:51.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating service multi-endpoint-test in namespace services-4629
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4629 to expose endpoints map[]
Oct 5 09:43:51.721: INFO: successfully validated that service multi-endpoint-test in namespace services-4629 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-4629
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4629 to expose endpoints map[pod1:[100]]
Oct 5 09:43:54.824: INFO: successfully validated that service multi-endpoint-test in namespace services-4629 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-4629
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4629 to expose endpoints map[pod1:[100] pod2:[101]]
Oct 5 09:43:57.935: INFO: successfully validated that service multi-endpoint-test in namespace services-4629 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Deleting pod pod1 in namespace services-4629
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4629 to expose endpoints map[pod2:[101]]
Oct 5 09:43:58.013: INFO: successfully validated that service multi-endpoint-test in namespace services-4629 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-4629
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4629 to expose endpoints map[]
Oct 5 09:43:58.103: INFO: successfully validated that service multi-endpoint-test in namespace services-4629 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:43:58.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4629" for this suite.
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:6.993 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":303,"completed":15,"skipped":266,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:43:58.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in 
namespace statefulset-6001 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating stateful set ss in namespace statefulset-6001 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6001 Oct 5 09:43:58.649: INFO: Found 0 stateful pods, waiting for 1 Oct 5 09:44:08.658: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Oct 5 09:44:08.663: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 09:44:10.204: INFO: stderr: "I1005 09:44:10.043151 133 log.go:181] (0x2eb0000) (0x2eb0070) Create stream\nI1005 09:44:10.047152 133 log.go:181] (0x2eb0000) (0x2eb0070) Stream added, broadcasting: 1\nI1005 09:44:10.059848 133 log.go:181] (0x2eb0000) Reply frame received for 1\nI1005 09:44:10.060751 133 log.go:181] (0x2eb0000) (0x25127e0) Create stream\nI1005 09:44:10.060956 133 log.go:181] (0x2eb0000) (0x25127e0) Stream added, broadcasting: 3\nI1005 09:44:10.063042 133 log.go:181] (0x2eb0000) Reply frame received for 3\nI1005 09:44:10.063559 133 log.go:181] (0x2eb0000) (0x27c4070) Create stream\nI1005 09:44:10.063672 133 log.go:181] (0x2eb0000) (0x27c4070) Stream added, broadcasting: 5\nI1005 09:44:10.065410 133 log.go:181] (0x2eb0000) Reply frame received for 5\nI1005 09:44:10.139707 133 log.go:181] (0x2eb0000) Data frame received for 5\nI1005 09:44:10.139938 133 log.go:181] (0x27c4070) (5) Data frame handling\nI1005 09:44:10.140343 133 log.go:181] (0x27c4070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 09:44:10.187370 
133 log.go:181] (0x2eb0000) Data frame received for 3\nI1005 09:44:10.187628 133 log.go:181] (0x25127e0) (3) Data frame handling\nI1005 09:44:10.187759 133 log.go:181] (0x2eb0000) Data frame received for 5\nI1005 09:44:10.187924 133 log.go:181] (0x27c4070) (5) Data frame handling\nI1005 09:44:10.188009 133 log.go:181] (0x25127e0) (3) Data frame sent\nI1005 09:44:10.188111 133 log.go:181] (0x2eb0000) Data frame received for 3\nI1005 09:44:10.188207 133 log.go:181] (0x25127e0) (3) Data frame handling\nI1005 09:44:10.189399 133 log.go:181] (0x2eb0000) Data frame received for 1\nI1005 09:44:10.189526 133 log.go:181] (0x2eb0070) (1) Data frame handling\nI1005 09:44:10.189681 133 log.go:181] (0x2eb0070) (1) Data frame sent\nI1005 09:44:10.190402 133 log.go:181] (0x2eb0000) (0x2eb0070) Stream removed, broadcasting: 1\nI1005 09:44:10.192151 133 log.go:181] (0x2eb0000) Go away received\nI1005 09:44:10.195652 133 log.go:181] (0x2eb0000) (0x2eb0070) Stream removed, broadcasting: 1\nI1005 09:44:10.195844 133 log.go:181] (0x2eb0000) (0x25127e0) Stream removed, broadcasting: 3\nI1005 09:44:10.196002 133 log.go:181] (0x2eb0000) (0x27c4070) Stream removed, broadcasting: 5\n" Oct 5 09:44:10.205: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 09:44:10.205: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 09:44:10.212: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 5 09:44:20.243: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 09:44:20.243: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 09:44:20.270: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:20.271: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:11 +0000 
UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }]
Oct 5 09:44:20.272: INFO: ss-1 Pending []
Oct 5 09:44:20.272: INFO:
Oct 5 09:44:20.272: INFO: StatefulSet ss has not reached scale 3, at 2
Oct 5 09:44:21.282: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987543244s
Oct 5 09:44:22.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.97687448s
Oct 5 09:44:23.627: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.643440032s
Oct 5 09:44:24.640: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.632032714s
Oct 5 09:44:25.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.618767354s
Oct 5 09:44:26.665: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.606550947s
Oct 5 09:44:27.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.594464341s
Oct 5 09:44:28.689: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.583184145s
Oct 5 09:44:29.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 570.387961ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6001
Oct 5 09:44:30.711: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Oct 5 09:44:32.254: INFO: stderr: "I1005 09:44:32.065071 154 log.go:181] (0x2e0f1f0) (0x2e0f260) Create stream\nI1005 09:44:32.068023 154 log.go:181] (0x2e0f1f0) (0x2e0f260) Stream added, broadcasting: 1\nI1005 09:44:32.080234 154 log.go:181] (0x2e0f1f0) Reply frame received for 1\nI1005 09:44:32.080960 154 log.go:181]
(0x2e0f1f0) (0x247dd50) Create stream\nI1005 09:44:32.081042 154 log.go:181] (0x2e0f1f0) (0x247dd50) Stream added, broadcasting: 3\nI1005 09:44:32.082637 154 log.go:181] (0x2e0f1f0) Reply frame received for 3\nI1005 09:44:32.083029 154 log.go:181] (0x2e0f1f0) (0x2e0f420) Create stream\nI1005 09:44:32.083128 154 log.go:181] (0x2e0f1f0) (0x2e0f420) Stream added, broadcasting: 5\nI1005 09:44:32.085101 154 log.go:181] (0x2e0f1f0) Reply frame received for 5\nI1005 09:44:32.179587 154 log.go:181] (0x2e0f1f0) Data frame received for 3\nI1005 09:44:32.179942 154 log.go:181] (0x2e0f1f0) Data frame received for 1\nI1005 09:44:32.180271 154 log.go:181] (0x2e0f1f0) Data frame received for 5\nI1005 09:44:32.180459 154 log.go:181] (0x2e0f420) (5) Data frame handling\nI1005 09:44:32.180613 154 log.go:181] (0x247dd50) (3) Data frame handling\nI1005 09:44:32.181070 154 log.go:181] (0x2e0f260) (1) Data frame handling\nI1005 09:44:32.182219 154 log.go:181] (0x2e0f260) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 09:44:32.207246 154 log.go:181] (0x2e0f420) (5) Data frame sent\nI1005 09:44:32.209039 154 log.go:181] (0x2e0f1f0) Data frame received for 5\nI1005 09:44:32.209156 154 log.go:181] (0x2e0f420) (5) Data frame handling\nI1005 09:44:32.238645 154 log.go:181] (0x2e0f1f0) (0x2e0f260) Stream removed, broadcasting: 1\nI1005 09:44:32.241549 154 log.go:181] (0x247dd50) (3) Data frame sent\nI1005 09:44:32.242809 154 log.go:181] (0x2e0f1f0) Data frame received for 3\nI1005 09:44:32.242886 154 log.go:181] (0x247dd50) (3) Data frame handling\nI1005 09:44:32.243151 154 log.go:181] (0x2e0f1f0) Go away received\nI1005 09:44:32.246477 154 log.go:181] (0x2e0f1f0) (0x2e0f260) Stream removed, broadcasting: 1\nI1005 09:44:32.246757 154 log.go:181] (0x2e0f1f0) (0x247dd50) Stream removed, broadcasting: 3\nI1005 09:44:32.246983 154 log.go:181] (0x2e0f1f0) (0x2e0f420) Stream removed, broadcasting: 5\n" Oct 5 09:44:32.254: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Oct 5 09:44:32.254: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 09:44:32.255: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 09:44:33.796: INFO: stderr: "I1005 09:44:33.649187 174 log.go:181] (0x2ea2000) (0x2ea2070) Create stream\nI1005 09:44:33.652604 174 log.go:181] (0x2ea2000) (0x2ea2070) Stream added, broadcasting: 1\nI1005 09:44:33.662910 174 log.go:181] (0x2ea2000) Reply frame received for 1\nI1005 09:44:33.664032 174 log.go:181] (0x2ea2000) (0x2ea2150) Create stream\nI1005 09:44:33.664170 174 log.go:181] (0x2ea2000) (0x2ea2150) Stream added, broadcasting: 3\nI1005 09:44:33.666287 174 log.go:181] (0x2ea2000) Reply frame received for 3\nI1005 09:44:33.666737 174 log.go:181] (0x2ea2000) (0x29663f0) Create stream\nI1005 09:44:33.666872 174 log.go:181] (0x2ea2000) (0x29663f0) Stream added, broadcasting: 5\nI1005 09:44:33.668733 174 log.go:181] (0x2ea2000) Reply frame received for 5\nI1005 09:44:33.759063 174 log.go:181] (0x2ea2000) Data frame received for 3\nI1005 09:44:33.759267 174 log.go:181] (0x2ea2150) (3) Data frame handling\nI1005 09:44:33.760285 174 log.go:181] (0x2ea2150) (3) Data frame sent\nI1005 09:44:33.765482 174 log.go:181] (0x2ea2000) Data frame received for 5\nI1005 09:44:33.768342 174 log.go:181] (0x29663f0) (5) Data frame handling\nI1005 09:44:33.768464 174 log.go:181] (0x2ea2000) Data frame received for 3\nI1005 09:44:33.768578 174 log.go:181] (0x2ea2000) Data frame received for 1\nI1005 09:44:33.768668 174 log.go:181] (0x2ea2070) (1) Data frame handling\nI1005 09:44:33.769014 174 log.go:181] (0x2ea2150) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such 
file or directory\n+ true\nI1005 09:44:33.769364 174 log.go:181] (0x2ea2070) (1) Data frame sent\nI1005 09:44:33.770263 174 log.go:181] (0x2ea2000) (0x2ea2070) Stream removed, broadcasting: 1\nI1005 09:44:33.770461 174 log.go:181] (0x29663f0) (5) Data frame sent\nI1005 09:44:33.770614 174 log.go:181] (0x2ea2000) Data frame received for 5\nI1005 09:44:33.770694 174 log.go:181] (0x29663f0) (5) Data frame handling\nI1005 09:44:33.773785 174 log.go:181] (0x2ea2000) Go away received\nI1005 09:44:33.785237 174 log.go:181] (0x2ea2000) (0x2ea2070) Stream removed, broadcasting: 1\nI1005 09:44:33.787146 174 log.go:181] (0x2ea2000) (0x2ea2150) Stream removed, broadcasting: 3\nI1005 09:44:33.788567 174 log.go:181] (0x2ea2000) (0x29663f0) Stream removed, broadcasting: 5\n" Oct 5 09:44:33.796: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 09:44:33.796: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 09:44:33.797: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 09:44:35.333: INFO: stderr: "I1005 09:44:35.213740 194 log.go:181] (0x30ae1c0) (0x30ae230) Create stream\nI1005 09:44:35.217405 194 log.go:181] (0x30ae1c0) (0x30ae230) Stream added, broadcasting: 1\nI1005 09:44:35.231634 194 log.go:181] (0x30ae1c0) Reply frame received for 1\nI1005 09:44:35.232478 194 log.go:181] (0x30ae1c0) (0x30ae3f0) Create stream\nI1005 09:44:35.232580 194 log.go:181] (0x30ae1c0) (0x30ae3f0) Stream added, broadcasting: 3\nI1005 09:44:35.234571 194 log.go:181] (0x30ae1c0) Reply frame received for 3\nI1005 09:44:35.234986 194 log.go:181] (0x30ae1c0) (0x30ae690) Create stream\nI1005 09:44:35.235087 194 log.go:181] (0x30ae1c0) (0x30ae690) Stream added, broadcasting: 5\nI1005 09:44:35.236918 194 
log.go:181] (0x30ae1c0) Reply frame received for 5\nI1005 09:44:35.311771 194 log.go:181] (0x30ae1c0) Data frame received for 3\nI1005 09:44:35.312172 194 log.go:181] (0x30ae1c0) Data frame received for 5\nI1005 09:44:35.312349 194 log.go:181] (0x30ae3f0) (3) Data frame handling\nI1005 09:44:35.312551 194 log.go:181] (0x30ae690) (5) Data frame handling\nI1005 09:44:35.313077 194 log.go:181] (0x30ae1c0) Data frame received for 1\nI1005 09:44:35.313354 194 log.go:181] (0x30ae230) (1) Data frame handling\nI1005 09:44:35.314250 194 log.go:181] (0x30ae690) (5) Data frame sent\nI1005 09:44:35.314507 194 log.go:181] (0x30ae230) (1) Data frame sent\nI1005 09:44:35.315002 194 log.go:181] (0x30ae3f0) (3) Data frame sent\nI1005 09:44:35.315224 194 log.go:181] (0x30ae1c0) Data frame received for 3\nI1005 09:44:35.315364 194 log.go:181] (0x30ae3f0) (3) Data frame handling\nI1005 09:44:35.315495 194 log.go:181] (0x30ae1c0) Data frame received for 5\nI1005 09:44:35.316270 194 log.go:181] (0x30ae1c0) (0x30ae230) Stream removed, broadcasting: 1\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1005 09:44:35.317959 194 log.go:181] (0x30ae690) (5) Data frame handling\nI1005 09:44:35.318841 194 log.go:181] (0x30ae1c0) Go away received\nI1005 09:44:35.323776 194 log.go:181] (0x30ae1c0) (0x30ae230) Stream removed, broadcasting: 1\nI1005 09:44:35.324130 194 log.go:181] (0x30ae1c0) (0x30ae3f0) Stream removed, broadcasting: 3\nI1005 09:44:35.324382 194 log.go:181] (0x30ae1c0) (0x30ae690) Stream removed, broadcasting: 5\n" Oct 5 09:44:35.334: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 09:44:35.334: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 09:44:35.342: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 09:44:35.342: INFO: 
Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 09:44:35.342: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Oct 5 09:44:35.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 09:44:36.836: INFO: stderr: "I1005 09:44:36.697374 214 log.go:181] (0x2eb5260) (0x2eb5420) Create stream\nI1005 09:44:36.699898 214 log.go:181] (0x2eb5260) (0x2eb5420) Stream added, broadcasting: 1\nI1005 09:44:36.709637 214 log.go:181] (0x2eb5260) Reply frame received for 1\nI1005 09:44:36.710037 214 log.go:181] (0x2eb5260) (0x2eb55e0) Create stream\nI1005 09:44:36.710106 214 log.go:181] (0x2eb5260) (0x2eb55e0) Stream added, broadcasting: 3\nI1005 09:44:36.711279 214 log.go:181] (0x2eb5260) Reply frame received for 3\nI1005 09:44:36.711456 214 log.go:181] (0x2eb5260) (0x2c26070) Create stream\nI1005 09:44:36.711512 214 log.go:181] (0x2eb5260) (0x2c26070) Stream added, broadcasting: 5\nI1005 09:44:36.712544 214 log.go:181] (0x2eb5260) Reply frame received for 5\nI1005 09:44:36.818042 214 log.go:181] (0x2eb5260) Data frame received for 3\nI1005 09:44:36.818320 214 log.go:181] (0x2eb5260) Data frame received for 5\nI1005 09:44:36.818457 214 log.go:181] (0x2c26070) (5) Data frame handling\nI1005 09:44:36.818882 214 log.go:181] (0x2eb55e0) (3) Data frame handling\nI1005 09:44:36.819155 214 log.go:181] (0x2c26070) (5) Data frame sent\nI1005 09:44:36.819397 214 log.go:181] (0x2eb55e0) (3) Data frame sent\nI1005 09:44:36.819785 214 log.go:181] (0x2eb5260) Data frame received for 1\nI1005 09:44:36.819995 214 log.go:181] (0x2eb5420) (1) Data frame handling\nI1005 09:44:36.820143 214 log.go:181] (0x2eb5260) Data frame received for 3\n+ mv -v /usr/local/apache2/htdocs/index.html 
/tmp/\nI1005 09:44:36.821369 214 log.go:181] (0x2eb55e0) (3) Data frame handling\nI1005 09:44:36.821608 214 log.go:181] (0x2eb5260) Data frame received for 5\nI1005 09:44:36.821733 214 log.go:181] (0x2c26070) (5) Data frame handling\nI1005 09:44:36.821841 214 log.go:181] (0x2eb5420) (1) Data frame sent\nI1005 09:44:36.823574 214 log.go:181] (0x2eb5260) (0x2eb5420) Stream removed, broadcasting: 1\nI1005 09:44:36.825465 214 log.go:181] (0x2eb5260) Go away received\nI1005 09:44:36.827606 214 log.go:181] (0x2eb5260) (0x2eb5420) Stream removed, broadcasting: 1\nI1005 09:44:36.827957 214 log.go:181] (0x2eb5260) (0x2eb55e0) Stream removed, broadcasting: 3\nI1005 09:44:36.828180 214 log.go:181] (0x2eb5260) (0x2c26070) Stream removed, broadcasting: 5\n" Oct 5 09:44:36.837: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 09:44:36.837: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 09:44:36.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 09:44:38.361: INFO: stderr: "I1005 09:44:38.211243 234 log.go:181] (0x25341c0) (0x2534230) Create stream\nI1005 09:44:38.214855 234 log.go:181] (0x25341c0) (0x2534230) Stream added, broadcasting: 1\nI1005 09:44:38.228405 234 log.go:181] (0x25341c0) Reply frame received for 1\nI1005 09:44:38.229524 234 log.go:181] (0x25341c0) (0x28d8310) Create stream\nI1005 09:44:38.229625 234 log.go:181] (0x25341c0) (0x28d8310) Stream added, broadcasting: 3\nI1005 09:44:38.231453 234 log.go:181] (0x25341c0) Reply frame received for 3\nI1005 09:44:38.231693 234 log.go:181] (0x25341c0) (0x28d84d0) Create stream\nI1005 09:44:38.231754 234 log.go:181] (0x25341c0) (0x28d84d0) Stream added, broadcasting: 5\nI1005 09:44:38.233100 234 log.go:181] 
(0x25341c0) Reply frame received for 5\nI1005 09:44:38.313173 234 log.go:181] (0x25341c0) Data frame received for 5\nI1005 09:44:38.313514 234 log.go:181] (0x28d84d0) (5) Data frame handling\nI1005 09:44:38.314127 234 log.go:181] (0x28d84d0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 09:44:38.343459 234 log.go:181] (0x25341c0) Data frame received for 5\nI1005 09:44:38.343735 234 log.go:181] (0x28d84d0) (5) Data frame handling\nI1005 09:44:38.344063 234 log.go:181] (0x25341c0) Data frame received for 3\nI1005 09:44:38.344236 234 log.go:181] (0x28d8310) (3) Data frame handling\nI1005 09:44:38.344399 234 log.go:181] (0x28d8310) (3) Data frame sent\nI1005 09:44:38.344608 234 log.go:181] (0x25341c0) Data frame received for 3\nI1005 09:44:38.344761 234 log.go:181] (0x28d8310) (3) Data frame handling\nI1005 09:44:38.345000 234 log.go:181] (0x25341c0) Data frame received for 1\nI1005 09:44:38.345149 234 log.go:181] (0x2534230) (1) Data frame handling\nI1005 09:44:38.345303 234 log.go:181] (0x2534230) (1) Data frame sent\nI1005 09:44:38.346588 234 log.go:181] (0x25341c0) (0x2534230) Stream removed, broadcasting: 1\nI1005 09:44:38.350073 234 log.go:181] (0x25341c0) Go away received\nI1005 09:44:38.351763 234 log.go:181] (0x25341c0) (0x2534230) Stream removed, broadcasting: 1\nI1005 09:44:38.352163 234 log.go:181] (0x25341c0) (0x28d8310) Stream removed, broadcasting: 3\nI1005 09:44:38.352304 234 log.go:181] (0x25341c0) (0x28d84d0) Stream removed, broadcasting: 5\n" Oct 5 09:44:38.362: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 09:44:38.362: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 09:44:38.363: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6001 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html 
/tmp/ || true' Oct 5 09:44:39.934: INFO: stderr: "I1005 09:44:39.755760 254 log.go:181] (0x2d660e0) (0x2d66150) Create stream\nI1005 09:44:39.758798 254 log.go:181] (0x2d660e0) (0x2d66150) Stream added, broadcasting: 1\nI1005 09:44:39.779423 254 log.go:181] (0x2d660e0) Reply frame received for 1\nI1005 09:44:39.779893 254 log.go:181] (0x2d660e0) (0x2de8070) Create stream\nI1005 09:44:39.779970 254 log.go:181] (0x2d660e0) (0x2de8070) Stream added, broadcasting: 3\nI1005 09:44:39.781576 254 log.go:181] (0x2d660e0) Reply frame received for 3\nI1005 09:44:39.781963 254 log.go:181] (0x2d660e0) (0x2d661c0) Create stream\nI1005 09:44:39.782074 254 log.go:181] (0x2d660e0) (0x2d661c0) Stream added, broadcasting: 5\nI1005 09:44:39.783252 254 log.go:181] (0x2d660e0) Reply frame received for 5\nI1005 09:44:39.879426 254 log.go:181] (0x2d660e0) Data frame received for 5\nI1005 09:44:39.879898 254 log.go:181] (0x2d661c0) (5) Data frame handling\nI1005 09:44:39.880697 254 log.go:181] (0x2d661c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 09:44:39.917835 254 log.go:181] (0x2d660e0) Data frame received for 3\nI1005 09:44:39.918033 254 log.go:181] (0x2de8070) (3) Data frame handling\nI1005 09:44:39.918188 254 log.go:181] (0x2d660e0) Data frame received for 5\nI1005 09:44:39.918347 254 log.go:181] (0x2d661c0) (5) Data frame handling\nI1005 09:44:39.918567 254 log.go:181] (0x2de8070) (3) Data frame sent\nI1005 09:44:39.918719 254 log.go:181] (0x2d660e0) Data frame received for 3\nI1005 09:44:39.918847 254 log.go:181] (0x2de8070) (3) Data frame handling\nI1005 09:44:39.919436 254 log.go:181] (0x2d660e0) Data frame received for 1\nI1005 09:44:39.919564 254 log.go:181] (0x2d66150) (1) Data frame handling\nI1005 09:44:39.919679 254 log.go:181] (0x2d66150) (1) Data frame sent\nI1005 09:44:39.920170 254 log.go:181] (0x2d660e0) (0x2d66150) Stream removed, broadcasting: 1\nI1005 09:44:39.922667 254 log.go:181] (0x2d660e0) Go away received\nI1005 
09:44:39.924439 254 log.go:181] (0x2d660e0) (0x2d66150) Stream removed, broadcasting: 1\nI1005 09:44:39.924716 254 log.go:181] (0x2d660e0) (0x2de8070) Stream removed, broadcasting: 3\nI1005 09:44:39.924961 254 log.go:181] (0x2d660e0) (0x2d661c0) Stream removed, broadcasting: 5\n" Oct 5 09:44:39.935: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 09:44:39.935: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 09:44:39.935: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 09:44:39.951: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 5 09:44:49.965: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 09:44:49.965: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 5 09:44:49.966: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 5 09:44:50.030: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:50.030: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:50.030: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:50.031: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:50.031: INFO: Oct 5 09:44:50.031: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:51.138: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:51.139: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:51.139: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:51.140: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:51.140: INFO: Oct 5 09:44:51.140: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:52.423: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:52.423: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:52.423: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:52.423: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:52.424: INFO: Oct 5 09:44:52.424: INFO: StatefulSet 
ss has not reached scale 0, at 3 Oct 5 09:44:53.435: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:53.436: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:53.436: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:53.436: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:53.437: INFO: Oct 5 09:44:53.437: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:54.448: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:54.448: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:54.448: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:54.449: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:54.449: INFO: Oct 5 09:44:54.449: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:55.475: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:55.475: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:55.476: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:55.476: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:55.477: INFO: Oct 5 09:44:55.477: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:56.489: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:56.489: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:56.490: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:56.490: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:56.491: INFO: Oct 5 09:44:56.491: INFO: StatefulSet ss has not reached scale 0, at 3 Oct 5 09:44:57.512: INFO: POD NODE PHASE GRACE CONDITIONS Oct 5 09:44:57.512: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }] Oct 5 09:44:57.513: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }] Oct 5 09:44:57.513: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }]
Oct 5 09:44:57.513: INFO:
Oct 5 09:44:57.513: INFO: StatefulSet ss has not reached scale 0, at 3
Oct 5 09:44:58.522: INFO: POD NODE PHASE GRACE CONDITIONS
Oct 5 09:44:58.522: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:37 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:43:58 +0000 UTC }]
Oct 5 09:44:58.522: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-10-05 09:44:20 +0000 UTC }]
Oct 5 09:44:58.522: INFO:
Oct 5 09:44:58.522: INFO: StatefulSet ss has not reached scale 0, at 2
Oct 5 09:44:59.576: INFO: Verifying statefulset ss doesn't scale past 0 for another 474.580429ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6001
Oct 5 09:45:00.584: INFO: Scaling statefulset ss to 0
Oct 5 09:45:00.601: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 5 09:45:00.606: INFO: Deleting all statefulset in ns statefulset-6001
Oct 5 09:45:00.613: INFO: Scaling statefulset ss to 0
Oct 5 09:45:00.627: INFO: Waiting for statefulset status.replicas updated to 0
Oct 5 09:45:00.632: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:45:00.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6001" for this suite.
• [SLOW TEST:62.217 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":303,"completed":16,"skipped":295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:45:00.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:45:00.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3131" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":303,"completed":17,"skipped":334,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:45:00.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Oct 5 09:45:01.016: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:01.031: INFO: Number of nodes with available pods: 0
Oct 5 09:45:01.032: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:02.166: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:02.305: INFO: Number of nodes with available pods: 0
Oct 5 09:45:02.305: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:03.046: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:03.280: INFO: Number of nodes with available pods: 0
Oct 5 09:45:03.280: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:04.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:04.399: INFO: Number of nodes with available pods: 0
Oct 5 09:45:04.399: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:05.073: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:05.081: INFO: Number of nodes with available pods: 2
Oct 5 09:45:05.082: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Oct 5 09:45:05.133: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:05.138: INFO: Number of nodes with available pods: 1
Oct 5 09:45:05.138: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:06.163: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:06.169: INFO: Number of nodes with available pods: 1
Oct 5 09:45:06.169: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:07.149: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:07.155: INFO: Number of nodes with available pods: 1
Oct 5 09:45:07.155: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:08.151: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:08.157: INFO: Number of nodes with available pods: 1
Oct 5 09:45:08.157: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:09.151: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:09.157: INFO: Number of nodes with available pods: 1
Oct 5 09:45:09.157: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:10.152: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:10.160: INFO: Number of nodes with available pods: 1
Oct 5 09:45:10.160: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:11.149: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:11.155: INFO: Number of nodes with available pods: 1
Oct 5 09:45:11.155: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:12.150: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:12.155: INFO: Number of nodes with available pods: 1
Oct 5 09:45:12.156: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:13.149: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:13.157: INFO: Number of nodes with available pods: 1
Oct 5 09:45:13.157: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:14.151: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:14.158: INFO: Number of nodes with available pods: 1
Oct 5 09:45:14.158: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:15.149: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:15.156: INFO: Number of nodes with available pods: 1
Oct 5 09:45:15.156: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:16.150: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:16.155: INFO: Number of nodes with available pods: 1
Oct 5 09:45:16.156: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:17.150: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:17.157: INFO: Number of nodes with available pods: 1
Oct 5 09:45:17.157: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:18.151: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:18.159: INFO: Number of nodes with available pods: 1
Oct 5 09:45:18.159: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:19.150: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:19.158: INFO: Number of nodes with available pods: 1
Oct 5 09:45:19.158: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:20.150: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:20.156: INFO: Number of nodes with available pods: 1
Oct 5 09:45:20.156: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:21.182: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:21.190: INFO: Number of nodes with available pods: 1
Oct 5 09:45:21.190: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:45:22.148: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:45:22.153: INFO: Number of nodes with available pods: 2
Oct 5 09:45:22.153: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-893, will wait for the garbage collector to delete the pods
Oct 5 09:45:22.224: INFO: Deleting DaemonSet.extensions daemon-set took: 9.378813ms
Oct 5 09:45:22.326: INFO: Terminating DaemonSet.extensions daemon-set pods took: 102.241656ms
Oct 5 09:45:28.732: INFO: Number of nodes with available pods: 0
Oct 5 09:45:28.733: INFO: Number of running nodes: 0, number of available pods: 0
Oct 5 09:45:28.754: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-893/daemonsets","resourceVersion":"3149416"},"items":null}
Oct 5 09:45:28.759: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-893/pods","resourceVersion":"3149416"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:45:28.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-893" for this suite.
• [SLOW TEST:27.930 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":303,"completed":18,"skipped":335,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:45:28.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5406
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5406
I1005 09:45:29.132806 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5406, replica count: 2
I1005 09:45:32.185489 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1005 09:45:35.186988 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Oct 5 09:45:35.187: INFO: Creating new exec pod
Oct 5 09:45:40.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5406 execpodwvctj -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Oct 5 09:45:41.708: INFO: stderr: "I1005 09:45:41.523875 275 log.go:181] (0x31a8000) (0x31a8070) Create stream\nI1005 09:45:41.525724 275 log.go:181] (0x31a8000) (0x31a8070) Stream added, broadcasting: 1\nI1005 09:45:41.542040 275 log.go:181] (0x31a8000) Reply frame received for 1\nI1005 09:45:41.543643 275 log.go:181] (0x31a8000) (0x2de8000) Create stream\nI1005 09:45:41.543781 275 log.go:181] (0x31a8000) (0x2de8000) Stream added, broadcasting: 3\nI1005 09:45:41.546453 275 log.go:181] (0x31a8000) Reply frame received for 3\nI1005 09:45:41.546830 275 log.go:181] (0x31a8000) (0x2f3e0e0) Create stream\nI1005 09:45:41.546902 275 log.go:181] (0x31a8000) (0x2f3e0e0) Stream added, broadcasting: 5\nI1005 09:45:41.547953 275 log.go:181] (0x31a8000) Reply frame received for 5\nI1005 09:45:41.666944 275 log.go:181] (0x31a8000) Data frame received for 5\nI1005 09:45:41.667202 275 log.go:181] (0x2f3e0e0) (5) Data frame handling\nI1005 09:45:41.667752 275 log.go:181] (0x2f3e0e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1005 09:45:41.694985 275 log.go:181] (0x31a8000) Data frame received for 3\nI1005 09:45:41.695107 275 log.go:181] (0x2de8000) (3) Data frame handling\nI1005 09:45:41.695203 275 log.go:181] (0x31a8000) Data frame received for 5\nI1005 09:45:41.695322 275 log.go:181] (0x2f3e0e0) (5) Data frame handling\nI1005 09:45:41.695422 275 log.go:181] (0x2f3e0e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1005 09:45:41.695862 275 log.go:181] (0x31a8000) Data frame received for 5\nI1005 09:45:41.695987 275 log.go:181] (0x2f3e0e0) (5) Data frame handling\nI1005 09:45:41.696972 275 log.go:181] (0x31a8000) Data frame received for 1\nI1005 09:45:41.697129 275 log.go:181] (0x31a8070) (1) Data frame handling\nI1005 09:45:41.697260 275 log.go:181] (0x31a8070) (1) Data frame sent\nI1005 09:45:41.698236 275 log.go:181] (0x31a8000) (0x31a8070) Stream removed, broadcasting: 1\nI1005 09:45:41.700692 275 log.go:181] (0x31a8000) Go away received\nI1005 09:45:41.702145 275 log.go:181] (0x31a8000) (0x31a8070) Stream removed, broadcasting: 1\nI1005 09:45:41.702283 275 log.go:181] (0x31a8000) (0x2de8000) Stream removed, broadcasting: 3\nI1005 09:45:41.702392 275 log.go:181] (0x31a8000) (0x2f3e0e0) Stream removed, broadcasting: 5\n"
Oct 5 09:45:41.709: INFO: stdout: ""
Oct 5 09:45:41.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5406 execpodwvctj -- /bin/sh -x -c nc -zv -t -w 2 10.101.18.116 80'
Oct 5 09:45:43.249: INFO: stderr: "I1005 09:45:43.105831 295 log.go:181] (0x2f60000) (0x2f60070) Create stream\nI1005 09:45:43.109579 295 log.go:181] (0x2f60000) (0x2f60070) Stream added, broadcasting: 1\nI1005 09:45:43.125426 295 log.go:181] (0x2f60000) Reply frame received for 1\nI1005 09:45:43.125901 295 log.go:181] (0x2f60000) (0x25d6070) Create stream\nI1005 09:45:43.125970 295 log.go:181] (0x2f60000) (0x25d6070) Stream added, broadcasting: 3\nI1005 09:45:43.127341 295 log.go:181] (0x2f60000) Reply frame received for 3\nI1005 09:45:43.127565 295 log.go:181] (0x2f60000) (0x2b2c1c0) Create stream\nI1005 09:45:43.127628 295 log.go:181] (0x2f60000) (0x2b2c1c0) Stream added, broadcasting: 5\nI1005 09:45:43.128728 295 log.go:181] (0x2f60000) Reply frame received for 5\nI1005 09:45:43.229956 295 log.go:181] (0x2f60000) Data frame received for 5\nI1005 09:45:43.230364 295 log.go:181] (0x2f60000) Data frame received for 3\nI1005 09:45:43.230592 295 log.go:181] (0x2b2c1c0) (5) Data frame handling\nI1005 09:45:43.230906 295 log.go:181] (0x25d6070) (3) Data frame handling\nI1005 09:45:43.231235 295 log.go:181] (0x2f60000) Data frame received for 1\nI1005 09:45:43.231352 295 log.go:181] (0x2f60070) (1) Data frame handling\nI1005 09:45:43.231829 295 log.go:181] (0x2f60070) (1) Data frame sent\nI1005 09:45:43.232124 295 log.go:181] (0x2b2c1c0) (5) Data frame sent\nI1005 09:45:43.232260 295 log.go:181] (0x2f60000) Data frame received for 5\nI1005 09:45:43.232357 295 log.go:181] (0x2b2c1c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.101.18.116 80\nConnection to 10.101.18.116 80 port [tcp/http] succeeded!\nI1005 09:45:43.235545 295 log.go:181] (0x2f60000) (0x2f60070) Stream removed, broadcasting: 1\nI1005 09:45:43.236240 295 log.go:181] (0x2f60000) Go away received\nI1005 09:45:43.239576 295 log.go:181] (0x2f60000) (0x2f60070) Stream removed, broadcasting: 1\nI1005 09:45:43.239888 295 log.go:181] (0x2f60000) (0x25d6070) Stream removed, broadcasting: 3\nI1005 09:45:43.240142 295 log.go:181] (0x2f60000) (0x2b2c1c0) Stream removed, broadcasting: 5\n"
Oct 5 09:45:43.250: INFO: stdout: ""
Oct 5 09:45:43.250: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:45:43.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5406" for this suite.
[AfterEach] [sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
• [SLOW TEST:14.546 seconds]
[sig-network] Services
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to change the type from ExternalName to ClusterIP [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":303,"completed":19,"skipped":351,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:45:43.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 09:45:43.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a" in namespace "projected-5624" to be "Succeeded or Failed"
Oct 5 09:45:43.470: INFO: Pod "downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.647114ms
Oct 5 09:45:45.571: INFO: Pod "downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136238874s
Oct 5 09:45:47.644: INFO: Pod "downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.209390287s
STEP: Saw pod success
Oct 5 09:45:47.644: INFO: Pod "downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a" satisfied condition "Succeeded or Failed"
Oct 5 09:45:47.650: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a container client-container:
STEP: delete the pod
Oct 5 09:45:47.714: INFO: Waiting for pod downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a to disappear
Oct 5 09:45:47.740: INFO: Pod downwardapi-volume-e6380f7b-7335-45e2-8d83-3d16307fb98a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:45:47.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5624" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":303,"completed":20,"skipped":356,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:45:47.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2307 Oct 5 09:45:52.169: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 5 09:45:53.802: INFO: stderr: "I1005 09:45:53.544649 315 log.go:181] (0x2512150) (0x25128c0) Create stream\nI1005 09:45:53.548050 315 log.go:181] (0x2512150) (0x25128c0) Stream added, broadcasting: 1\nI1005 09:45:53.564266 315 log.go:181] (0x2512150) Reply frame received for 1\nI1005 09:45:53.565192 315 log.go:181] (0x2512150) 
(0x2513570) Create stream\nI1005 09:45:53.565297 315 log.go:181] (0x2512150) (0x2513570) Stream added, broadcasting: 3\nI1005 09:45:53.567454 315 log.go:181] (0x2512150) Reply frame received for 3\nI1005 09:45:53.567811 315 log.go:181] (0x2512150) (0x2dae070) Create stream\nI1005 09:45:53.567903 315 log.go:181] (0x2512150) (0x2dae070) Stream added, broadcasting: 5\nI1005 09:45:53.569640 315 log.go:181] (0x2512150) Reply frame received for 5\nI1005 09:45:53.677191 315 log.go:181] (0x2512150) Data frame received for 5\nI1005 09:45:53.677515 315 log.go:181] (0x2dae070) (5) Data frame handling\nI1005 09:45:53.678145 315 log.go:181] (0x2dae070) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1005 09:45:53.782715 315 log.go:181] (0x2512150) Data frame received for 3\nI1005 09:45:53.782927 315 log.go:181] (0x2513570) (3) Data frame handling\nI1005 09:45:53.783149 315 log.go:181] (0x2513570) (3) Data frame sent\nI1005 09:45:53.783497 315 log.go:181] (0x2512150) Data frame received for 3\nI1005 09:45:53.783780 315 log.go:181] (0x2512150) Data frame received for 5\nI1005 09:45:53.783952 315 log.go:181] (0x2dae070) (5) Data frame handling\nI1005 09:45:53.784098 315 log.go:181] (0x2513570) (3) Data frame handling\nI1005 09:45:53.785401 315 log.go:181] (0x2512150) Data frame received for 1\nI1005 09:45:53.785562 315 log.go:181] (0x25128c0) (1) Data frame handling\nI1005 09:45:53.785738 315 log.go:181] (0x25128c0) (1) Data frame sent\nI1005 09:45:53.786894 315 log.go:181] (0x2512150) (0x25128c0) Stream removed, broadcasting: 1\nI1005 09:45:53.790220 315 log.go:181] (0x2512150) Go away received\nI1005 09:45:53.793783 315 log.go:181] (0x2512150) (0x25128c0) Stream removed, broadcasting: 1\nI1005 09:45:53.794237 315 log.go:181] (0x2512150) (0x2513570) Stream removed, broadcasting: 3\nI1005 09:45:53.794406 315 log.go:181] (0x2512150) (0x2dae070) Stream removed, broadcasting: 5\n" Oct 5 09:45:53.802: INFO: stdout: "iptables" Oct 5 
09:45:53.802: INFO: proxyMode: iptables Oct 5 09:45:53.815: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 09:45:53.822: INFO: Pod kube-proxy-mode-detector still exists Oct 5 09:45:55.823: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 09:45:55.829: INFO: Pod kube-proxy-mode-detector still exists Oct 5 09:45:57.822: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 09:45:57.829: INFO: Pod kube-proxy-mode-detector still exists Oct 5 09:45:59.822: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 09:45:59.828: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-2307 STEP: creating replication controller affinity-clusterip-timeout in namespace services-2307 I1005 09:45:59.881755 10 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2307, replica count: 3 I1005 09:46:02.933328 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 09:46:05.934095 10 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 09:46:05.946: INFO: Creating new exec pod Oct 5 09:46:11.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 execpod-affinitycvskd -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80' Oct 5 09:46:12.494: INFO: stderr: "I1005 09:46:12.399729 335 log.go:181] (0x294e0e0) (0x294e150) Create stream\nI1005 09:46:12.401901 335 log.go:181] (0x294e0e0) (0x294e150) Stream added, broadcasting: 1\nI1005 09:46:12.413539 335 log.go:181] (0x294e0e0) Reply frame received for 1\nI1005 09:46:12.414370 335 log.go:181] (0x294e0e0) (0x294e310) Create stream\nI1005 
09:46:12.414492 335 log.go:181] (0x294e0e0) (0x294e310) Stream added, broadcasting: 3\nI1005 09:46:12.416314 335 log.go:181] (0x294e0e0) Reply frame received for 3\nI1005 09:46:12.416534 335 log.go:181] (0x294e0e0) (0x2b30e00) Create stream\nI1005 09:46:12.416622 335 log.go:181] (0x294e0e0) (0x2b30e00) Stream added, broadcasting: 5\nI1005 09:46:12.418189 335 log.go:181] (0x294e0e0) Reply frame received for 5\nI1005 09:46:12.479261 335 log.go:181] (0x294e0e0) Data frame received for 3\nI1005 09:46:12.479777 335 log.go:181] (0x294e0e0) Data frame received for 5\nI1005 09:46:12.480022 335 log.go:181] (0x294e0e0) Data frame received for 1\nI1005 09:46:12.480205 335 log.go:181] (0x294e150) (1) Data frame handling\nI1005 09:46:12.480332 335 log.go:181] (0x2b30e00) (5) Data frame handling\nI1005 09:46:12.480584 335 log.go:181] (0x294e310) (3) Data frame handling\nI1005 09:46:12.482213 335 log.go:181] (0x294e150) (1) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\nI1005 09:46:12.482530 335 log.go:181] (0x2b30e00) (5) Data frame sent\nI1005 09:46:12.483820 335 log.go:181] (0x294e0e0) Data frame received for 5\nI1005 09:46:12.483965 335 log.go:181] (0x2b30e00) (5) Data frame handling\nI1005 09:46:12.484504 335 log.go:181] (0x294e0e0) (0x294e150) Stream removed, broadcasting: 1\nI1005 09:46:12.485218 335 log.go:181] (0x294e0e0) Go away received\nI1005 09:46:12.487103 335 log.go:181] (0x294e0e0) (0x294e150) Stream removed, broadcasting: 1\nI1005 09:46:12.487586 335 log.go:181] (0x294e0e0) (0x294e310) Stream removed, broadcasting: 3\nI1005 09:46:12.487732 335 log.go:181] (0x294e0e0) (0x2b30e00) Stream removed, broadcasting: 5\n" Oct 5 09:46:12.495: INFO: stdout: "" Oct 5 09:46:12.498: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 execpod-affinitycvskd -- /bin/sh -x -c nc -zv -t -w 2 10.98.230.188 80' 
Oct 5 09:46:14.068: INFO: stderr: "+ nc -zv -t -w 2 10.98.230.188 80\nConnection to 10.98.230.188 80 port [tcp/http] succeeded!\n" Oct 5 09:46:14.068: INFO: stdout: "" Oct 5 09:46:14.069: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 execpod-affinitycvskd -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.98.230.188:80/ ; done' Oct 5 09:46:15.823: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.98.230.188:80/\n
" Oct 5 09:46:15.826: INFO: stdout: "\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km\naffinity-clusterip-timeout-ps4km" Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: 
affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.827: INFO: Received response from host: affinity-clusterip-timeout-ps4km Oct 5 09:46:15.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 execpod-affinitycvskd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.230.188:80/' Oct 5 09:46:17.371: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.98.230.188:80/\n" Oct 5 09:46:17.372: INFO: stdout: "affinity-clusterip-timeout-ps4km" Oct 5 09:46:32.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2307 execpod-affinitycvskd -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.98.230.188:80/' Oct 5 09:46:33.891: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.98.230.188:80/\n" Oct 5 09:46:33.893: INFO: stdout: "affinity-clusterip-timeout-sbf7h" Oct 5 09:46:33.893: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2307, will wait for the garbage 
collector to delete the pods Oct 5 09:46:34.003: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 10.490793ms Oct 5 09:46:34.404: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 400.882995ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:46:48.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2307" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:60.405 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":21,"skipped":386,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:46:48.254: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 09:46:48.383: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e" in namespace "projected-5547" to be "Succeeded or Failed" Oct 5 09:46:48.399: INFO: Pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e": Phase="Pending", Reason="", readiness=false. Elapsed: 15.909495ms Oct 5 09:46:50.409: INFO: Pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02598285s Oct 5 09:46:52.545: INFO: Pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e": Phase="Running", Reason="", readiness=true. Elapsed: 4.161947327s Oct 5 09:46:54.553: INFO: Pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.169906124s STEP: Saw pod success Oct 5 09:46:54.553: INFO: Pod "downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e" satisfied condition "Succeeded or Failed" Oct 5 09:46:54.557: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e container client-container: STEP: delete the pod Oct 5 09:46:54.621: INFO: Waiting for pod downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e to disappear Oct 5 09:46:54.662: INFO: Pod downwardapi-volume-d42704e9-db9a-478e-a648-ecb2d649839e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:46:54.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5547" for this suite. • [SLOW TEST:6.429 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":22,"skipped":393,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:46:54.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 09:46:54.817: INFO: Create a RollingUpdate DaemonSet
Oct 5 09:46:54.825: INFO: Check that daemon pods launch on every node of the cluster
Oct 5 09:46:54.847: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:54.852: INFO: Number of nodes with available pods: 0
Oct 5 09:46:54.852: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:46:55.925: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:55.932: INFO: Number of nodes with available pods: 0
Oct 5 09:46:55.932: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:46:56.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:56.993: INFO: Number of nodes with available pods: 0
Oct 5 09:46:56.993: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:46:57.968: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:58.002: INFO: Number of nodes with available pods: 0
Oct 5 09:46:58.002: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:46:58.866: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:58.872: INFO: Number of nodes with available pods: 0
Oct 5 09:46:58.872: INFO: Node kali-worker is running more than one daemon pod
Oct 5 09:46:59.865: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:46:59.872: INFO: Number of nodes with available pods: 2
Oct 5 09:46:59.872: INFO: Number of running nodes: 2, number of available pods: 2
Oct 5 09:46:59.872: INFO: Update the DaemonSet to trigger a rollout
Oct 5 09:46:59.883: INFO: Updating DaemonSet daemon-set
Oct 5 09:47:08.961: INFO: Roll back the DaemonSet before rollout is complete
Oct 5 09:47:08.974: INFO: Updating DaemonSet daemon-set
Oct 5 09:47:08.974: INFO: Make sure DaemonSet rollback is complete
Oct 5 09:47:09.001: INFO: Wrong image for pod: daemon-set-6qm96. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct 5 09:47:09.002: INFO: Pod daemon-set-6qm96 is not available
Oct 5 09:47:09.033: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:47:10.046: INFO: Wrong image for pod: daemon-set-6qm96. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Oct 5 09:47:10.046: INFO: Pod daemon-set-6qm96 is not available
Oct 5 09:47:10.054: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Oct 5 09:47:11.042: INFO: Pod daemon-set-q5xwl is not available
Oct 5 09:47:11.050: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1291, will wait for the garbage collector to delete the pods
Oct 5 09:47:11.124: INFO: Deleting DaemonSet.extensions daemon-set took: 7.405524ms
Oct 5 09:47:11.625: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.946741ms
Oct 5 09:47:18.730: INFO: Number of nodes with available pods: 0
Oct 5 09:47:18.730: INFO: Number of running nodes: 0, number of available pods: 0
Oct 5 09:47:18.734: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1291/daemonsets","resourceVersion":"3150158"},"items":null}
Oct 5 09:47:18.737: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1291/pods","resourceVersion":"3150158"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:47:18.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1291" for this suite.
• [SLOW TEST:24.089 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":303,"completed":23,"skipped":401,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:47:18.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:47:36.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5552" for this suite.
• [SLOW TEST:18.101 seconds]
[sig-apps] Job
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":303,"completed":24,"skipped":408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:47:36.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: set up a multi version CRD
Oct 5 09:47:37.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:49:30.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6782" for this suite.
• [SLOW TEST:113.858 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":303,"completed":25,"skipped":457,"failed":0}
SSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:49:30.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4660.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4660.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4660.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4660.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4660.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4660.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 5 09:49:36.930: INFO: DNS probes using dns-4660/dns-test-da5347ee-7a2d-4f33-9680-546aa9538c2d succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:49:37.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4660" for this suite.
• [SLOW TEST:6.354 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":303,"completed":26,"skipped":465,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:49:37.094: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 5 09:49:37.493: INFO: Waiting up to 5m0s for pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25" in namespace "emptydir-5062" to be "Succeeded or Failed"
Oct 5 09:49:37.519: INFO: Pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25": Phase="Pending", Reason="", readiness=false. Elapsed: 25.552914ms
Oct 5 09:49:39.571: INFO: Pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077053687s
Oct 5 09:49:41.607: INFO: Pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113035692s
Oct 5 09:49:43.615: INFO: Pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.121646247s
STEP: Saw pod success
Oct 5 09:49:43.616: INFO: Pod "pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25" satisfied condition "Succeeded or Failed"
Oct 5 09:49:43.622: INFO: Trying to get logs from node kali-worker2 pod pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25 container test-container:
STEP: delete the pod
Oct 5 09:49:43.678: INFO: Waiting for pod pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25 to disappear
Oct 5 09:49:43.724: INFO: Pod pod-7d3151ec-1c65-465f-9800-3b41a3e9ec25 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:49:43.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5062" for this suite.
• [SLOW TEST:6.679 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":27,"skipped":481,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:49:43.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:50:00.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6860" for this suite.
STEP: Destroying namespace "nsdeletetest-2660" for this suite.
Oct 5 09:50:00.158: INFO: Namespace nsdeletetest-2660 was already deleted
STEP: Destroying namespace "nsdeletetest-5336" for this suite.
• [SLOW TEST:16.390 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":303,"completed":28,"skipped":488,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:50:00.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support proxy with --port 0 [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting the proxy server
Oct 5 09:50:00.266: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:50:01.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9991" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":303,"completed":29,"skipped":502,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:50:01.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 5 09:50:07.569: INFO: DNS probes using dns-test-622ca9b8-5158-458a-b696-c7d13b2a183b succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 5 09:50:16.131: INFO: File wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:16.141: INFO: File jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:16.141: INFO: Lookups using dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 failed for: [wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local]
Oct 5 09:50:21.149: INFO: File wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:21.153: INFO: File jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:21.153: INFO: Lookups using dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 failed for: [wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local]
Oct 5 09:50:26.148: INFO: File wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:26.154: INFO: File jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:26.154: INFO: Lookups using dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 failed for: [wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local]
Oct 5 09:50:31.149: INFO: File wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:31.154: INFO: File jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:31.154: INFO: Lookups using dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 failed for: [wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local]
Oct 5 09:50:36.150: INFO: File wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:36.154: INFO: File jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local from pod dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 contains 'foo.example.com. ' instead of 'bar.example.com.'
Oct 5 09:50:36.155: INFO: Lookups using dns-7055/dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 failed for: [wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local]
Oct 5 09:50:41.196: INFO: DNS probes using dns-test-6842951e-cea6-49c9-8589-837bcf96f5b0 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7055.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7055.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 5 09:50:49.898: INFO: DNS probes using dns-test-39a3ce1b-b5bd-4f70-b9fa-e409b9d2a544 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:50:49.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7055" for this suite.
• [SLOW TEST:48.584 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":303,"completed":30,"skipped":511,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 09:50:50.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Oct 5 09:50:54.531: INFO: &Pod{ObjectMeta:{send-events-49b1e7ac-3ec1-4225-aa1a-b7621dd10ce6 events-9591 /api/v1/namespaces/events-9591/pods/send-events-49b1e7ac-3ec1-4225-aa1a-b7621dd10ce6 052a73bd-ff30-408b-a6ca-0327572bacaa 3151483 0 2020-10-05 09:50:50 +0000 UTC map[name:foo time:448755294] map[] [] [] [{e2e.test Update v1 2020-10-05 09:50:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 09:50:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.237\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rj856,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rj856,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rj856,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 09:50:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 09:50:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 09:50:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 09:50:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.237,StartTime:2020-10-05 09:50:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 09:50:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://c4332e7d5e8139b7cda4c534742a80725425363807e1b92d5a8a0a71620bd3ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: checking for scheduler event about the pod
Oct 5 09:50:56.557: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Oct 5 09:50:58.567: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 09:50:58.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9591" for this suite.
• [SLOW TEST:8.602 seconds] [k8s.io] [sig-node] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":303,"completed":31,"skipped":521,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:50:58.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with configMap that has name projected-configmap-test-upd-8bff45b8-c9ab-48e7-acec-5d461c1c41fc STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-8bff45b8-c9ab-48e7-acec-5d461c1c41fc STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:04.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4601" for this suite. • [SLOW TEST:6.264 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":32,"skipped":567,"failed":0} SSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:04.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should provide secure master service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:04.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8339" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":303,"completed":33,"skipped":573,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:05.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 09:51:15.706: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 09:51:17.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488275, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488275, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488275, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488275, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 09:51:20.813: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:21.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2641" for this suite. STEP: Destroying namespace "webhook-2641-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.612 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":303,"completed":34,"skipped":573,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:22.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap configmap-2799/configmap-test-c6fa6a26-9b31-4feb-a7eb-391fd259b10b STEP: Creating a pod to test consume configMaps Oct 5 09:51:23.402: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5" in namespace "configmap-2799" to be "Succeeded or Failed" Oct 5 09:51:23.416: INFO: Pod "pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.844896ms Oct 5 09:51:25.424: INFO: Pod "pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021168741s Oct 5 09:51:27.430: INFO: Pod "pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028036046s STEP: Saw pod success Oct 5 09:51:27.431: INFO: Pod "pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5" satisfied condition "Succeeded or Failed" Oct 5 09:51:27.435: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5 container env-test: STEP: delete the pod Oct 5 09:51:27.468: INFO: Waiting for pod pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5 to disappear Oct 5 09:51:27.482: INFO: Pod pod-configmaps-de2fe4a3-50d3-4a0b-b663-ee462c25d8d5 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:27.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2799" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":35,"skipped":589,"failed":0} SSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:27.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:27.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-5244" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":303,"completed":36,"skipped":593,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:27.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:51:33.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-49" for this suite. 
• [SLOW TEST:6.270 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not conflict [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":303,"completed":37,"skipped":654,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:51:33.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:51:34.448: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Oct 5 09:51:35.106: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:35Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 
fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:51:35Z]] name:name1 resourceVersion:3151940 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0bb9464a-1a39-4e9d-a132-b50f60f1436d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Oct 5 09:51:45.119: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:45Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:51:45Z]] name:name2 resourceVersion:3152008 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:07578188-5f93-49d0-a8f1-ca63217aea16] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Oct 5 09:51:55.132: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:51:55Z]] name:name1 resourceVersion:3152066 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0bb9464a-1a39-4e9d-a132-b50f60f1436d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Oct 5 09:52:05.146: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] 
f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:52:05Z]] name:name2 resourceVersion:3152111 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:07578188-5f93-49d0-a8f1-ca63217aea16] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Oct 5 09:52:15.159: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:35Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:51:55Z]] name:name1 resourceVersion:3152168 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0bb9464a-1a39-4e9d-a132-b50f60f1436d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Oct 5 09:52:25.173: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-10-05T09:51:45Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-10-05T09:52:05Z]] name:name2 resourceVersion:3152219 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:07578188-5f93-49d0-a8f1-ca63217aea16] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:52:35.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9063" for this 
suite. • [SLOW TEST:61.772 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":303,"completed":38,"skipped":686,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:52:35.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in different groups (two CRDs) show up in OpenAPI 
documentation Oct 5 09:52:35.774: INFO: >>> kubeConfig: /root/.kube/config Oct 5 09:52:46.455: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:53:59.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6971" for this suite. • [SLOW TEST:83.422 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":303,"completed":39,"skipped":693,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:53:59.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 09:53:59.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14" in namespace "projected-612" to be "Succeeded or Failed" Oct 5 09:53:59.244: INFO: Pod "downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14": Phase="Pending", Reason="", readiness=false. Elapsed: 48.258272ms Oct 5 09:54:01.251: INFO: Pod "downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054908403s Oct 5 09:54:03.258: INFO: Pod "downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.062438107s STEP: Saw pod success Oct 5 09:54:03.258: INFO: Pod "downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14" satisfied condition "Succeeded or Failed" Oct 5 09:54:03.266: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14 container client-container: STEP: delete the pod Oct 5 09:54:03.297: INFO: Waiting for pod downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14 to disappear Oct 5 09:54:03.305: INFO: Pod downwardapi-volume-b3b3608e-31a4-4bc3-a398-2590887acf14 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:54:03.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-612" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":40,"skipped":694,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:54:03.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's args Oct 5 09:54:03.661: INFO: Waiting up to 5m0s for pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f" in namespace "var-expansion-5934" to be "Succeeded or Failed" Oct 5 09:54:03.671: INFO: Pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.395766ms Oct 5 09:54:05.763: INFO: Pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102391487s Oct 5 09:54:07.772: INFO: Pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f": Phase="Running", Reason="", readiness=true. Elapsed: 4.110812429s Oct 5 09:54:09.781: INFO: Pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11995364s STEP: Saw pod success Oct 5 09:54:09.781: INFO: Pod "var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f" satisfied condition "Succeeded or Failed" Oct 5 09:54:09.787: INFO: Trying to get logs from node kali-worker pod var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f container dapi-container: STEP: delete the pod Oct 5 09:54:09.849: INFO: Waiting for pod var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f to disappear Oct 5 09:54:09.875: INFO: Pod var-expansion-4dd7d639-3e3f-46a3-9cdc-1d93af8c342f no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:54:09.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5934" for this suite. 
• [SLOW TEST:6.569 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":303,"completed":41,"skipped":718,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:54:09.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:54:21.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6557" for this suite. • [SLOW TEST:11.364 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":303,"completed":42,"skipped":723,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:54:21.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on tmpfs Oct 5 09:54:21.339: INFO: Waiting up to 5m0s for pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f" in namespace "emptydir-7925" to be "Succeeded or Failed" Oct 5 09:54:21.364: INFO: Pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f": Phase="Pending", Reason="", readiness=false. Elapsed: 24.138044ms Oct 5 09:54:23.406: INFO: Pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066073992s Oct 5 09:54:25.414: INFO: Pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f": Phase="Running", Reason="", readiness=true. Elapsed: 4.07474893s Oct 5 09:54:27.423: INFO: Pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.083617241s STEP: Saw pod success Oct 5 09:54:27.424: INFO: Pod "pod-e3d11129-6aae-4676-89e2-9486a3d3e65f" satisfied condition "Succeeded or Failed" Oct 5 09:54:27.430: INFO: Trying to get logs from node kali-worker pod pod-e3d11129-6aae-4676-89e2-9486a3d3e65f container test-container: STEP: delete the pod Oct 5 09:54:27.452: INFO: Waiting for pod pod-e3d11129-6aae-4676-89e2-9486a3d3e65f to disappear Oct 5 09:54:27.482: INFO: Pod pod-e3d11129-6aae-4676-89e2-9486a3d3e65f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:54:27.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7925" for this suite. • [SLOW TEST:6.240 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":43,"skipped":747,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:54:27.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:54:43.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7743" for this suite. 
• [SLOW TEST:16.282 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":303,"completed":44,"skipped":784,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:54:43.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Oct 5 09:54:58.827: INFO: deployment 
"sample-crd-conversion-webhook-deployment" doesn't have the required revision set Oct 5 09:55:00.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488498, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488498, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488498, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 09:55:03.881: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:55:03.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:05.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "crd-webhook-6853" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:21.508 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":303,"completed":45,"skipped":795,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:05.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should contain environment variables for services [NodeConformance] 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:55:09.491: INFO: Waiting up to 5m0s for pod "client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff" in namespace "pods-2632" to be "Succeeded or Failed" Oct 5 09:55:09.503: INFO: Pod "client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff": Phase="Pending", Reason="", readiness=false. Elapsed: 12.137887ms Oct 5 09:55:11.580: INFO: Pod "client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08927444s Oct 5 09:55:13.588: INFO: Pod "client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096716905s STEP: Saw pod success Oct 5 09:55:13.588: INFO: Pod "client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff" satisfied condition "Succeeded or Failed" Oct 5 09:55:13.593: INFO: Trying to get logs from node kali-worker pod client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff container env3cont: STEP: delete the pod Oct 5 09:55:13.645: INFO: Waiting for pod client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff to disappear Oct 5 09:55:13.656: INFO: Pod client-envvars-ceb6d664-df22-4f34-8cb3-80e785099dff no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:13.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2632" for this suite. 
• [SLOW TEST:8.371 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":303,"completed":46,"skipped":826,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:13.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name cm-test-opt-del-99128ecc-e2b4-4329-9335-b10dd2b8e680 STEP: Creating configMap with name cm-test-opt-upd-0a1fc3aa-5eb1-4a00-b40c-587646ddc0ff STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-99128ecc-e2b4-4329-9335-b10dd2b8e680 STEP: Updating configmap cm-test-opt-upd-0a1fc3aa-5eb1-4a00-b40c-587646ddc0ff STEP: Creating configMap with name 
cm-test-opt-create-6b69cebc-60b2-4ae5-9248-6f510e55d59a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:23.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4522" for this suite. • [SLOW TEST:10.333 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":47,"skipped":835,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:24.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-5ae93f4b-4de9-40a0-a5f3-f37a875c7175 STEP: Creating a pod to test consume configMaps Oct 5 09:55:24.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5" in namespace "projected-8770" to be "Succeeded or Failed" Oct 5 09:55:24.170: INFO: Pod "pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5": Phase="Pending", Reason="", readiness=false. Elapsed: 34.236784ms Oct 5 09:55:26.203: INFO: Pod "pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066802687s Oct 5 09:55:28.211: INFO: Pod "pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074371481s STEP: Saw pod success Oct 5 09:55:28.211: INFO: Pod "pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5" satisfied condition "Succeeded or Failed" Oct 5 09:55:28.215: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5 container projected-configmap-volume-test: STEP: delete the pod Oct 5 09:55:28.250: INFO: Waiting for pod pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5 to disappear Oct 5 09:55:28.254: INFO: Pod pod-projected-configmaps-d978855a-7565-4fd4-98ca-dd45548689a5 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:28.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8770" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":48,"skipped":842,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:28.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check is all data is printed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:55:28.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config version' Oct 5 09:55:29.714: INFO: stderr: "" Oct 5 09:55:29.714: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.2\", GitCommit:\"f5743093fd1c663cb0cbc89748f730662345d44d\", GitTreeState:\"clean\", BuildDate:\"2020-09-16T13:41:02Z\", GoVersion:\"go1.15\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"19\", GitVersion:\"v1.19.0\", GitCommit:\"e19964183377d0ec2052d1f1fa930c4d7575bd50\", GitTreeState:\"clean\", BuildDate:\"2020-08-28T22:11:08Z\", GoVersion:\"go1.15\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:29.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5940" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":303,"completed":49,"skipped":858,"failed":0} SS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:29.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:55:29.849: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9c3c358b-59af-425b-b0da-30199df9e2a7" in namespace "security-context-test-6126" to be "Succeeded or Failed" Oct 5 09:55:29.879: INFO: Pod "busybox-user-65534-9c3c358b-59af-425b-b0da-30199df9e2a7": Phase="Pending", 
Reason="", readiness=false. Elapsed: 29.334623ms Oct 5 09:55:32.024: INFO: Pod "busybox-user-65534-9c3c358b-59af-425b-b0da-30199df9e2a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174153003s Oct 5 09:55:34.032: INFO: Pod "busybox-user-65534-9c3c358b-59af-425b-b0da-30199df9e2a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182426293s Oct 5 09:55:34.032: INFO: Pod "busybox-user-65534-9c3c358b-59af-425b-b0da-30199df9e2a7" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:55:34.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6126" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":50,"skipped":860,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:55:34.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: set up a multi version CRD Oct 5 09:55:34.145: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:57:17.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-22" for this suite. • [SLOW TEST:103.226 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":303,"completed":51,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Oct 5 09:57:17.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service endpoint-test2 in namespace services-2352 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2352 to expose endpoints map[] Oct 5 09:57:17.432: INFO: successfully validated that service endpoint-test2 in namespace services-2352 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-2352 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2352 to expose endpoints map[pod1:[80]] Oct 5 09:57:20.491: INFO: successfully validated that service endpoint-test2 in namespace services-2352 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-2352 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2352 to expose endpoints map[pod1:[80] pod2:[80]] Oct 5 09:57:23.602: INFO: successfully validated that service endpoint-test2 in namespace services-2352 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-2352 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2352 to expose endpoints map[pod2:[80]] Oct 5 09:57:23.702: INFO: successfully validated that service endpoint-test2 in namespace services-2352 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-2352 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2352 to expose endpoints map[] Oct 5 
09:57:23.941: INFO: successfully validated that service endpoint-test2 in namespace services-2352 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:57:23.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2352" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:6.840 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":303,"completed":52,"skipped":896,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:57:24.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a 
volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 09:59:24.218: INFO: Deleting pod "var-expansion-905f0301-c7f1-456a-b733-efdf4db29b7d" in namespace "var-expansion-4042" Oct 5 09:59:24.226: INFO: Wait up to 5m0s for pod "var-expansion-905f0301-c7f1-456a-b733-efdf4db29b7d" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:59:28.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4042" for this suite. • [SLOW TEST:124.179 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":303,"completed":53,"skipped":905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:59:28.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-29e10e9f-ce3b-4070-8796-c63e2c0b9d71 STEP: Creating a pod to test consume configMaps Oct 5 09:59:28.397: INFO: Waiting up to 5m0s for pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d" in namespace "configmap-1104" to be "Succeeded or Failed" Oct 5 09:59:28.405: INFO: Pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.999778ms Oct 5 09:59:30.419: INFO: Pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021376877s Oct 5 09:59:32.428: INFO: Pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d": Phase="Running", Reason="", readiness=true. Elapsed: 4.030790423s Oct 5 09:59:34.436: INFO: Pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.038380342s STEP: Saw pod success Oct 5 09:59:34.436: INFO: Pod "pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d" satisfied condition "Succeeded or Failed" Oct 5 09:59:34.443: INFO: Trying to get logs from node kali-worker pod pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d container configmap-volume-test: STEP: delete the pod Oct 5 09:59:34.491: INFO: Waiting for pod pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d to disappear Oct 5 09:59:34.503: INFO: Pod pod-configmaps-b59bdc30-60f6-40f8-b8a7-feeb97a30f6d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 09:59:34.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1104" for this suite. • [SLOW TEST:6.213 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":54,"skipped":962,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 09:59:34.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 09:59:34.642: INFO: PodSpec: initContainers in spec.initContainers Oct 5 10:00:27.687: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-6536127c-bf1b-497c-b926-60e219a32797", GenerateName:"", Namespace:"init-container-7132", SelfLink:"/api/v1/namespaces/init-container-7132/pods/pod-init-6536127c-bf1b-497c-b926-60e219a32797", UID:"199d3333-d707-47c5-a48a-01863944a61e", ResourceVersion:"3154197", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63737488774, loc:(*time.Location)(0x5d1d160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"641583793"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0x8c5b040), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x938a7d0)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", 
APIVersion:"v1", Time:(*v1.Time)(0x8c5b060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x938a7e0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mk5hl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x8c5b080), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mk5hl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mk5hl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mk5hl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x9380ac8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x847d080), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9380b50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9380b70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x9380b78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x9380b7c), PreemptionPolicy:(*v1.PreemptionPolicy)(0x69080f8), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", 
Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488774, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488774, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488774, loc:(*time.Location)(0x5d1d160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737488774, loc:(*time.Location)(0x5d1d160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.12", PodIP:"10.244.2.249", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.249"}}, StartTime:(*v1.Time)(0x8c5b120), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x97839f0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9783a40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://387c8b2d06796c02d072a87380e5449d4bb78e40bd55cccb3039c81ebe86481f", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x938a800), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x938a7f0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0x9380bff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:00:27.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7132" for this suite. 
• [SLOW TEST:53.212 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":303,"completed":55,"skipped":962,"failed":0} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:00:27.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9099 [It] Scaling should happen in predictable order and 
halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9099 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9099 Oct 5 10:00:27.865: INFO: Found 0 stateful pods, waiting for 1 Oct 5 10:00:37.873: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Oct 5 10:00:37.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:00:42.243: INFO: stderr: "I1005 10:00:42.080072 471 log.go:181] (0x267e150) (0x267fc00) Create stream\nI1005 10:00:42.085990 471 log.go:181] (0x267e150) (0x267fc00) Stream added, broadcasting: 1\nI1005 10:00:42.098502 471 log.go:181] (0x267e150) Reply frame received for 1\nI1005 10:00:42.099102 471 log.go:181] (0x267e150) (0x27fe380) Create stream\nI1005 10:00:42.099185 471 log.go:181] (0x267e150) (0x27fe380) Stream added, broadcasting: 3\nI1005 10:00:42.101007 471 log.go:181] (0x267e150) Reply frame received for 3\nI1005 10:00:42.101494 471 log.go:181] (0x267e150) (0x27b70a0) Create stream\nI1005 10:00:42.101643 471 log.go:181] (0x267e150) (0x27b70a0) Stream added, broadcasting: 5\nI1005 10:00:42.103234 471 log.go:181] (0x267e150) Reply frame received for 5\nI1005 10:00:42.192222 471 log.go:181] (0x267e150) Data frame received for 5\nI1005 10:00:42.192545 471 log.go:181] (0x27b70a0) (5) Data frame handling\nI1005 10:00:42.193185 471 log.go:181] (0x27b70a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:00:42.224415 471 
log.go:181] (0x267e150) Data frame received for 5\nI1005 10:00:42.224642 471 log.go:181] (0x27b70a0) (5) Data frame handling\nI1005 10:00:42.225246 471 log.go:181] (0x267e150) Data frame received for 3\nI1005 10:00:42.225531 471 log.go:181] (0x27fe380) (3) Data frame handling\nI1005 10:00:42.225762 471 log.go:181] (0x27fe380) (3) Data frame sent\nI1005 10:00:42.225952 471 log.go:181] (0x267e150) Data frame received for 3\nI1005 10:00:42.226214 471 log.go:181] (0x27fe380) (3) Data frame handling\nI1005 10:00:42.227587 471 log.go:181] (0x267e150) Data frame received for 1\nI1005 10:00:42.227753 471 log.go:181] (0x267fc00) (1) Data frame handling\nI1005 10:00:42.227941 471 log.go:181] (0x267fc00) (1) Data frame sent\nI1005 10:00:42.228703 471 log.go:181] (0x267e150) (0x267fc00) Stream removed, broadcasting: 1\nI1005 10:00:42.231641 471 log.go:181] (0x267e150) Go away received\nI1005 10:00:42.235183 471 log.go:181] (0x267e150) (0x267fc00) Stream removed, broadcasting: 1\nI1005 10:00:42.235408 471 log.go:181] (0x267e150) (0x27fe380) Stream removed, broadcasting: 3\nI1005 10:00:42.235560 471 log.go:181] (0x267e150) (0x27b70a0) Stream removed, broadcasting: 5\n" Oct 5 10:00:42.244: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:00:42.244: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 10:00:42.254: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Oct 5 10:00:52.261: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 10:00:52.262: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 10:00:52.293: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999984925s Oct 5 10:00:53.302: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.982890178s Oct 5 10:00:54.311: INFO: Verifying statefulset ss doesn't scale past 1 
for another 7.974030405s Oct 5 10:00:55.319: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.964894819s Oct 5 10:00:56.326: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.957686624s Oct 5 10:00:57.336: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.949855532s Oct 5 10:00:58.346: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.940732802s Oct 5 10:00:59.355: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.930135577s Oct 5 10:01:00.363: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.921050581s Oct 5 10:01:01.374: INFO: Verifying statefulset ss doesn't scale past 1 for another 912.91825ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9099 Oct 5 10:01:02.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:01:03.847: INFO: stderr: "I1005 10:01:03.734419 491 log.go:181] (0x295e8c0) (0x295e930) Create stream\nI1005 10:01:03.736112 491 log.go:181] (0x295e8c0) (0x295e930) Stream added, broadcasting: 1\nI1005 10:01:03.744586 491 log.go:181] (0x295e8c0) Reply frame received for 1\nI1005 10:01:03.745265 491 log.go:181] (0x295e8c0) (0x2800070) Create stream\nI1005 10:01:03.747903 491 log.go:181] (0x295e8c0) (0x2800070) Stream added, broadcasting: 3\nI1005 10:01:03.755702 491 log.go:181] (0x295e8c0) Reply frame received for 3\nI1005 10:01:03.756066 491 log.go:181] (0x295e8c0) (0x2bf60e0) Create stream\nI1005 10:01:03.756153 491 log.go:181] (0x295e8c0) (0x2bf60e0) Stream added, broadcasting: 5\nI1005 10:01:03.757407 491 log.go:181] (0x295e8c0) Reply frame received for 5\nI1005 10:01:03.827057 491 log.go:181] (0x295e8c0) Data frame received for 3\nI1005 10:01:03.827402 491 log.go:181] (0x295e8c0) Data frame received for 
5\nI1005 10:01:03.827755 491 log.go:181] (0x2bf60e0) (5) Data frame handling\nI1005 10:01:03.828013 491 log.go:181] (0x2800070) (3) Data frame handling\nI1005 10:01:03.828915 491 log.go:181] (0x295e8c0) Data frame received for 1\nI1005 10:01:03.829022 491 log.go:181] (0x295e930) (1) Data frame handling\nI1005 10:01:03.829144 491 log.go:181] (0x295e930) (1) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 10:01:03.829686 491 log.go:181] (0x2800070) (3) Data frame sent\nI1005 10:01:03.829991 491 log.go:181] (0x2bf60e0) (5) Data frame sent\nI1005 10:01:03.830264 491 log.go:181] (0x295e8c0) Data frame received for 5\nI1005 10:01:03.830357 491 log.go:181] (0x2bf60e0) (5) Data frame handling\nI1005 10:01:03.830437 491 log.go:181] (0x295e8c0) Data frame received for 3\nI1005 10:01:03.830561 491 log.go:181] (0x2800070) (3) Data frame handling\nI1005 10:01:03.831854 491 log.go:181] (0x295e8c0) (0x295e930) Stream removed, broadcasting: 1\nI1005 10:01:03.833374 491 log.go:181] (0x295e8c0) Go away received\nI1005 10:01:03.836926 491 log.go:181] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0x2800070), 0x5:(*spdystream.Stream)(0x2bf60e0)}\nI1005 10:01:03.837405 491 log.go:181] (0x295e8c0) (0x295e930) Stream removed, broadcasting: 1\nI1005 10:01:03.838019 491 log.go:181] (0x295e8c0) (0x2800070) Stream removed, broadcasting: 3\nI1005 10:01:03.838413 491 log.go:181] (0x295e8c0) (0x2bf60e0) Stream removed, broadcasting: 5\n" Oct 5 10:01:03.847: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 10:01:03.847: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 10:01:03.854: INFO: Found 1 stateful pods, waiting for 3 Oct 5 10:01:13.866: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 10:01:13.866: INFO: Waiting for pod ss-1 to enter Running - 
Ready=true, currently Running - Ready=true Oct 5 10:01:13.866: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Oct 5 10:01:13.883: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:01:15.371: INFO: stderr: "I1005 10:01:15.246140 511 log.go:181] (0x2948000) (0x2948070) Create stream\nI1005 10:01:15.247838 511 log.go:181] (0x2948000) (0x2948070) Stream added, broadcasting: 1\nI1005 10:01:15.267869 511 log.go:181] (0x2948000) Reply frame received for 1\nI1005 10:01:15.269121 511 log.go:181] (0x2948000) (0x30b6000) Create stream\nI1005 10:01:15.269314 511 log.go:181] (0x2948000) (0x30b6000) Stream added, broadcasting: 3\nI1005 10:01:15.273690 511 log.go:181] (0x2948000) Reply frame received for 3\nI1005 10:01:15.273960 511 log.go:181] (0x2948000) (0x2b14460) Create stream\nI1005 10:01:15.274029 511 log.go:181] (0x2948000) (0x2b14460) Stream added, broadcasting: 5\nI1005 10:01:15.275077 511 log.go:181] (0x2948000) Reply frame received for 5\nI1005 10:01:15.353853 511 log.go:181] (0x2948000) Data frame received for 3\nI1005 10:01:15.354132 511 log.go:181] (0x30b6000) (3) Data frame handling\nI1005 10:01:15.354433 511 log.go:181] (0x2948000) Data frame received for 5\nI1005 10:01:15.354664 511 log.go:181] (0x2b14460) (5) Data frame handling\nI1005 10:01:15.355023 511 log.go:181] (0x2948000) Data frame received for 1\nI1005 10:01:15.355144 511 log.go:181] (0x2948070) (1) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:01:15.355729 511 log.go:181] (0x2948070) (1) Data frame sent\nI1005 10:01:15.355812 511 log.go:181] (0x2b14460) (5) Data frame sent\nI1005 10:01:15.355973 511 log.go:181] (0x30b6000) (3) Data 
frame sent\nI1005 10:01:15.356069 511 log.go:181] (0x2948000) Data frame received for 3\nI1005 10:01:15.356162 511 log.go:181] (0x30b6000) (3) Data frame handling\nI1005 10:01:15.356318 511 log.go:181] (0x2948000) Data frame received for 5\nI1005 10:01:15.356421 511 log.go:181] (0x2b14460) (5) Data frame handling\nI1005 10:01:15.357821 511 log.go:181] (0x2948000) (0x2948070) Stream removed, broadcasting: 1\nI1005 10:01:15.359826 511 log.go:181] (0x2948000) Go away received\nI1005 10:01:15.362820 511 log.go:181] (0x2948000) (0x2948070) Stream removed, broadcasting: 1\nI1005 10:01:15.363325 511 log.go:181] (0x2948000) (0x30b6000) Stream removed, broadcasting: 3\nI1005 10:01:15.363457 511 log.go:181] (0x2948000) (0x2b14460) Stream removed, broadcasting: 5\n" Oct 5 10:01:15.371: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:01:15.371: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 10:01:15.372: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:01:16.939: INFO: stderr: "I1005 10:01:16.739503 531 log.go:181] (0x285c2a0) (0x285c310) Create stream\nI1005 10:01:16.742021 531 log.go:181] (0x285c2a0) (0x285c310) Stream added, broadcasting: 1\nI1005 10:01:16.774392 531 log.go:181] (0x285c2a0) Reply frame received for 1\nI1005 10:01:16.775217 531 log.go:181] (0x285c2a0) (0x285c460) Create stream\nI1005 10:01:16.775329 531 log.go:181] (0x285c2a0) (0x285c460) Stream added, broadcasting: 3\nI1005 10:01:16.777206 531 log.go:181] (0x285c2a0) Reply frame received for 3\nI1005 10:01:16.777415 531 log.go:181] (0x285c2a0) (0x2f30070) Create stream\nI1005 10:01:16.777476 531 log.go:181] (0x285c2a0) (0x2f30070) Stream added, broadcasting: 5\nI1005 10:01:16.778725 531 
log.go:181] (0x285c2a0) Reply frame received for 5\nI1005 10:01:16.856185 531 log.go:181] (0x285c2a0) Data frame received for 5\nI1005 10:01:16.856435 531 log.go:181] (0x2f30070) (5) Data frame handling\nI1005 10:01:16.857035 531 log.go:181] (0x2f30070) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:01:16.923271 531 log.go:181] (0x285c2a0) Data frame received for 3\nI1005 10:01:16.923510 531 log.go:181] (0x285c2a0) Data frame received for 5\nI1005 10:01:16.923773 531 log.go:181] (0x2f30070) (5) Data frame handling\nI1005 10:01:16.923929 531 log.go:181] (0x285c460) (3) Data frame handling\nI1005 10:01:16.924118 531 log.go:181] (0x285c460) (3) Data frame sent\nI1005 10:01:16.924251 531 log.go:181] (0x285c2a0) Data frame received for 3\nI1005 10:01:16.924404 531 log.go:181] (0x285c460) (3) Data frame handling\nI1005 10:01:16.925470 531 log.go:181] (0x285c2a0) Data frame received for 1\nI1005 10:01:16.925642 531 log.go:181] (0x285c310) (1) Data frame handling\nI1005 10:01:16.925824 531 log.go:181] (0x285c310) (1) Data frame sent\nI1005 10:01:16.927476 531 log.go:181] (0x285c2a0) (0x285c310) Stream removed, broadcasting: 1\nI1005 10:01:16.928711 531 log.go:181] (0x285c2a0) Go away received\nI1005 10:01:16.931040 531 log.go:181] (0x285c2a0) (0x285c310) Stream removed, broadcasting: 1\nI1005 10:01:16.931370 531 log.go:181] (0x285c2a0) (0x285c460) Stream removed, broadcasting: 3\nI1005 10:01:16.931658 531 log.go:181] (0x285c2a0) (0x2f30070) Stream removed, broadcasting: 5\n" Oct 5 10:01:16.940: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:01:16.940: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 10:01:16.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:01:18.559: INFO: stderr: "I1005 10:01:18.416174 551 log.go:181] (0x2db4000) (0x2db4070) Create stream\nI1005 10:01:18.420083 551 log.go:181] (0x2db4000) (0x2db4070) Stream added, broadcasting: 1\nI1005 10:01:18.428320 551 log.go:181] (0x2db4000) Reply frame received for 1\nI1005 10:01:18.428770 551 log.go:181] (0x2db4000) (0x2db4230) Create stream\nI1005 10:01:18.428892 551 log.go:181] (0x2db4000) (0x2db4230) Stream added, broadcasting: 3\nI1005 10:01:18.430245 551 log.go:181] (0x2db4000) Reply frame received for 3\nI1005 10:01:18.430478 551 log.go:181] (0x2db4000) (0x2938850) Create stream\nI1005 10:01:18.430535 551 log.go:181] (0x2db4000) (0x2938850) Stream added, broadcasting: 5\nI1005 10:01:18.431707 551 log.go:181] (0x2db4000) Reply frame received for 5\nI1005 10:01:18.513967 551 log.go:181] (0x2db4000) Data frame received for 5\nI1005 10:01:18.514291 551 log.go:181] (0x2938850) (5) Data frame handling\nI1005 10:01:18.514920 551 log.go:181] (0x2938850) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:01:18.541088 551 log.go:181] (0x2db4000) Data frame received for 3\nI1005 10:01:18.541318 551 log.go:181] (0x2db4230) (3) Data frame handling\nI1005 10:01:18.541595 551 log.go:181] (0x2db4000) Data frame received for 5\nI1005 10:01:18.541830 551 log.go:181] (0x2938850) (5) Data frame handling\nI1005 10:01:18.542115 551 log.go:181] (0x2db4230) (3) Data frame sent\nI1005 10:01:18.542352 551 log.go:181] (0x2db4000) Data frame received for 3\nI1005 10:01:18.542503 551 log.go:181] (0x2db4230) (3) Data frame handling\nI1005 10:01:18.543589 551 log.go:181] (0x2db4000) Data frame received for 1\nI1005 10:01:18.543743 551 log.go:181] (0x2db4070) (1) Data frame handling\nI1005 10:01:18.543919 551 log.go:181] (0x2db4070) (1) Data frame sent\nI1005 10:01:18.545632 551 log.go:181] (0x2db4000) (0x2db4070) Stream removed, broadcasting: 1\nI1005 10:01:18.546816 551 log.go:181] 
(0x2db4000) Go away received\nI1005 10:01:18.551028 551 log.go:181] (0x2db4000) (0x2db4070) Stream removed, broadcasting: 1\nI1005 10:01:18.551272 551 log.go:181] (0x2db4000) (0x2db4230) Stream removed, broadcasting: 3\nI1005 10:01:18.551439 551 log.go:181] (0x2db4000) (0x2938850) Stream removed, broadcasting: 5\n" Oct 5 10:01:18.559: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:01:18.560: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 10:01:18.560: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 10:01:18.566: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Oct 5 10:01:28.584: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Oct 5 10:01:28.584: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Oct 5 10:01:28.584: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Oct 5 10:01:28.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999986422s Oct 5 10:01:29.613: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991532145s Oct 5 10:01:30.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981717097s Oct 5 10:01:31.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970839608s Oct 5 10:01:32.646: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.961067705s Oct 5 10:01:33.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.948949865s Oct 5 10:01:34.668: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.938011463s Oct 5 10:01:35.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.92698993s Oct 5 10:01:36.691: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.916312511s Oct 5 10:01:37.700: INFO: Verifying 
statefulset ss doesn't scale past 3 for another 904.603356ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9099 Oct 5 10:01:38.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:01:40.383: INFO: stderr: "I1005 10:01:40.198439 572 log.go:181] (0x2f220e0) (0x2f22150) Create stream\nI1005 10:01:40.200111 572 log.go:181] (0x2f220e0) (0x2f22150) Stream added, broadcasting: 1\nI1005 10:01:40.207063 572 log.go:181] (0x2f220e0) Reply frame received for 1\nI1005 10:01:40.207477 572 log.go:181] (0x2f220e0) (0x29f7180) Create stream\nI1005 10:01:40.207536 572 log.go:181] (0x2f220e0) (0x29f7180) Stream added, broadcasting: 3\nI1005 10:01:40.209085 572 log.go:181] (0x2f220e0) Reply frame received for 3\nI1005 10:01:40.209504 572 log.go:181] (0x2f220e0) (0x2f22310) Create stream\nI1005 10:01:40.209612 572 log.go:181] (0x2f220e0) (0x2f22310) Stream added, broadcasting: 5\nI1005 10:01:40.211231 572 log.go:181] (0x2f220e0) Reply frame received for 5\nI1005 10:01:40.289467 572 log.go:181] (0x2f220e0) Data frame received for 3\nI1005 10:01:40.290416 572 log.go:181] (0x2f220e0) Data frame received for 5\nI1005 10:01:40.290782 572 log.go:181] (0x2f22310) (5) Data frame handling\nI1005 10:01:40.291164 572 log.go:181] (0x2f220e0) Data frame received for 1\nI1005 10:01:40.291297 572 log.go:181] (0x2f22150) (1) Data frame handling\nI1005 10:01:40.291503 572 log.go:181] (0x29f7180) (3) Data frame handling\nI1005 10:01:40.292261 572 log.go:181] (0x2f22310) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 10:01:40.293284 572 log.go:181] (0x29f7180) (3) Data frame sent\nI1005 10:01:40.293437 572 log.go:181] (0x2f220e0) Data frame received for 5\nI1005 10:01:40.293552 572 log.go:181] (0x2f220e0) Data frame received for 
3\nI1005 10:01:40.293791 572 log.go:181] (0x29f7180) (3) Data frame handling\nI1005 10:01:40.370198 572 log.go:181] (0x2f22310) (5) Data frame handling\nI1005 10:01:40.372984 572 log.go:181] (0x2f22150) (1) Data frame sent\nI1005 10:01:40.374982 572 log.go:181] (0x2f220e0) (0x2f22150) Stream removed, broadcasting: 1\nI1005 10:01:40.375303 572 log.go:181] (0x2f220e0) Go away received\nI1005 10:01:40.377156 572 log.go:181] (0x2f220e0) (0x2f22150) Stream removed, broadcasting: 1\nI1005 10:01:40.377296 572 log.go:181] (0x2f220e0) (0x29f7180) Stream removed, broadcasting: 3\nI1005 10:01:40.377407 572 log.go:181] (0x2f220e0) (0x2f22310) Stream removed, broadcasting: 5\n" Oct 5 10:01:40.384: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 10:01:40.384: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 10:01:40.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:01:41.927: INFO: stderr: "I1005 10:01:41.787388 592 log.go:181] (0x2b30000) (0x2b30380) Create stream\nI1005 10:01:41.790273 592 log.go:181] (0x2b30000) (0x2b30380) Stream added, broadcasting: 1\nI1005 10:01:41.800676 592 log.go:181] (0x2b30000) Reply frame received for 1\nI1005 10:01:41.801437 592 log.go:181] (0x2b30000) (0x29e05b0) Create stream\nI1005 10:01:41.801541 592 log.go:181] (0x2b30000) (0x29e05b0) Stream added, broadcasting: 3\nI1005 10:01:41.803181 592 log.go:181] (0x2b30000) Reply frame received for 3\nI1005 10:01:41.803610 592 log.go:181] (0x2b30000) (0x2e22070) Create stream\nI1005 10:01:41.803763 592 log.go:181] (0x2b30000) (0x2e22070) Stream added, broadcasting: 5\nI1005 10:01:41.805853 592 log.go:181] (0x2b30000) Reply frame received for 5\nI1005 10:01:41.907778 592 log.go:181] 
(0x2b30000) Data frame received for 5\nI1005 10:01:41.908061 592 log.go:181] (0x2b30000) Data frame received for 3\nI1005 10:01:41.908267 592 log.go:181] (0x29e05b0) (3) Data frame handling\nI1005 10:01:41.909047 592 log.go:181] (0x2b30000) Data frame received for 1\nI1005 10:01:41.909285 592 log.go:181] (0x2b30380) (1) Data frame handling\nI1005 10:01:41.909575 592 log.go:181] (0x2e22070) (5) Data frame handling\nI1005 10:01:41.910905 592 log.go:181] (0x2b30380) (1) Data frame sent\nI1005 10:01:41.911352 592 log.go:181] (0x2e22070) (5) Data frame sent\nI1005 10:01:41.911563 592 log.go:181] (0x2b30000) Data frame received for 5\nI1005 10:01:41.911737 592 log.go:181] (0x2e22070) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 10:01:41.912254 592 log.go:181] (0x29e05b0) (3) Data frame sent\nI1005 10:01:41.913009 592 log.go:181] (0x2b30000) Data frame received for 3\nI1005 10:01:41.913298 592 log.go:181] (0x2b30000) (0x2b30380) Stream removed, broadcasting: 1\nI1005 10:01:41.914037 592 log.go:181] (0x29e05b0) (3) Data frame handling\nI1005 10:01:41.915657 592 log.go:181] (0x2b30000) Go away received\nI1005 10:01:41.918688 592 log.go:181] (0x2b30000) (0x2b30380) Stream removed, broadcasting: 1\nI1005 10:01:41.918976 592 log.go:181] (0x2b30000) (0x29e05b0) Stream removed, broadcasting: 3\nI1005 10:01:41.919161 592 log.go:181] (0x2b30000) (0x2e22070) Stream removed, broadcasting: 5\n" Oct 5 10:01:41.928: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 10:01:41.928: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Oct 5 10:01:41.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:01:43.410: INFO: rc: 1 Oct 5 10:01:43.412: INFO: 
Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Oct 5 10:01:53.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:01:54.734: INFO: rc: 1 Oct 5 10:01:54.734: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:02:04.736: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:02:05.968: INFO: rc: 1 Oct 5 10:02:05.968: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:02:15.969: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:02:17.225: INFO: rc: 1 Oct 5 10:02:17.225: INFO: Waiting 10s to 
retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:02:27.227: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:02:28.504: INFO: rc: 1 Oct 5 10:02:28.504: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:02:38.505: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:02:39.727: INFO: rc: 1 Oct 5 10:02:39.727: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:02:49.728: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:02:51.037: INFO: rc: 1 Oct 5 10:02:51.038: INFO: Waiting 10s to retry failed RunHostCmd: error 
running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:01.038: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:02.267: INFO: rc: 1 Oct 5 10:03:02.267: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:12.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:13.490: INFO: rc: 1 Oct 5 10:03:13.491: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:23.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:24.745: INFO: rc: 1 Oct 5 10:03:24.746: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:34.747: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:35.996: INFO: rc: 1 Oct 5 10:03:35.996: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:45.998: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:47.308: INFO: rc: 1 Oct 5 10:03:47.309: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:03:57.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:03:58.543: INFO: rc: 1 Oct 5 10:03:58.544: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:04:08.545: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:04:09.848: INFO: rc: 1 Oct 5 10:04:09.849: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:04:19.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:04:21.078: INFO: rc: 1 Oct 5 10:04:21.079: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:04:31.080: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:04:32.311: INFO: rc: 1 Oct 5 10:04:32.311: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:04:42.312: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:04:43.561: INFO: rc: 1 Oct 5 10:04:43.561: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:04:53.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:04:54.812: INFO: rc: 1 Oct 5 10:04:54.812: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:05:04.813: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:05:06.081: INFO: rc: 1 Oct 5 10:05:06.081: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:05:16.083: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:05:17.318: INFO: rc: 1 Oct 5 10:05:17.319: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:05:27.321: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:05:28.584: INFO: rc: 1 Oct 5 10:05:28.584: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:05:38.586: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:05:39.830: INFO: rc: 1 Oct 5 10:05:39.831: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:05:49.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:05:51.083: INFO: rc: 1 Oct 5 10:05:51.083: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:06:01.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:06:02.346: INFO: rc: 1 Oct 5 10:06:02.346: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:06:12.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:06:13.686: INFO: rc: 1 Oct 5 10:06:13.687: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: 
Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:06:23.688: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:06:24.956: INFO: rc: 1 Oct 5 10:06:24.957: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:06:34.958: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:06:36.204: INFO: rc: 1 Oct 5 10:06:36.205: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Oct 5 10:06:46.206: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9099 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:06:47.437: INFO: rc: 1 Oct 5 10:06:47.437: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: Oct 5 10:06:47.437: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 10:06:47.458: INFO: Deleting all statefulset in ns statefulset-9099 Oct 5 10:06:47.463: INFO: Scaling statefulset ss to 0 Oct 5 10:06:47.479: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 10:06:47.483: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:06:47.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9099" for this suite. • [SLOW TEST:379.781 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":303,"completed":56,"skipped":965,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:06:47.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:06:47.595: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:06:48.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4777" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":303,"completed":57,"skipped":994,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:06:48.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:06:48.843: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140" in namespace "projected-2297" to be "Succeeded or Failed" Oct 5 10:06:48.857: INFO: Pod "downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.605356ms Oct 5 10:06:50.870: INFO: Pod "downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026538943s Oct 5 10:06:52.877: INFO: Pod "downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033492572s STEP: Saw pod success Oct 5 10:06:52.877: INFO: Pod "downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140" satisfied condition "Succeeded or Failed" Oct 5 10:06:52.881: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140 container client-container: STEP: delete the pod Oct 5 10:06:52.951: INFO: Waiting for pod downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140 to disappear Oct 5 10:06:52.976: INFO: Pod downwardapi-volume-ece6dede-145a-46c2-93b2-3c8dfe32b140 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:06:52.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2297" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":58,"skipped":1000,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:06:52.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-5361 STEP: creating service affinity-clusterip in namespace services-5361 STEP: creating replication controller affinity-clusterip in namespace services-5361 I1005 10:06:53.136115 10 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-5361, replica count: 3 I1005 10:06:56.187969 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:06:59.188787 10 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 
running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 10:06:59.222: INFO: Creating new exec pod Oct 5 10:07:04.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5361 execpod-affinityzj4lc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Oct 5 10:07:05.751: INFO: stderr: "I1005 10:07:05.627286 1180 log.go:181] (0x267b8f0) (0x267b9d0) Create stream\nI1005 10:07:05.629653 1180 log.go:181] (0x267b8f0) (0x267b9d0) Stream added, broadcasting: 1\nI1005 10:07:05.642348 1180 log.go:181] (0x267b8f0) Reply frame received for 1\nI1005 10:07:05.642967 1180 log.go:181] (0x267b8f0) (0x2db2070) Create stream\nI1005 10:07:05.643067 1180 log.go:181] (0x267b8f0) (0x2db2070) Stream added, broadcasting: 3\nI1005 10:07:05.645193 1180 log.go:181] (0x267b8f0) Reply frame received for 3\nI1005 10:07:05.645519 1180 log.go:181] (0x267b8f0) (0x267bdc0) Create stream\nI1005 10:07:05.645609 1180 log.go:181] (0x267b8f0) (0x267bdc0) Stream added, broadcasting: 5\nI1005 10:07:05.651541 1180 log.go:181] (0x267b8f0) Reply frame received for 5\nI1005 10:07:05.732800 1180 log.go:181] (0x267b8f0) Data frame received for 5\nI1005 10:07:05.733098 1180 log.go:181] (0x267b8f0) Data frame received for 3\nI1005 10:07:05.733530 1180 log.go:181] (0x2db2070) (3) Data frame handling\nI1005 10:07:05.733764 1180 log.go:181] (0x267bdc0) (5) Data frame handling\nI1005 10:07:05.734361 1180 log.go:181] (0x267b8f0) Data frame received for 1\nI1005 10:07:05.734477 1180 log.go:181] (0x267b9d0) (1) Data frame handling\nI1005 10:07:05.735415 1180 log.go:181] (0x267bdc0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip 80\nI1005 10:07:05.735797 1180 log.go:181] (0x267b9d0) (1) Data frame sent\nI1005 10:07:05.736030 1180 log.go:181] (0x267b8f0) Data frame received for 5\nI1005 10:07:05.736170 1180 log.go:181] (0x267bdc0) (5) Data frame handling\nI1005 10:07:05.736320 1180 log.go:181] 
(0x267bdc0) (5) Data frame sent\nI1005 10:07:05.736446 1180 log.go:181] (0x267b8f0) Data frame received for 5\nI1005 10:07:05.736570 1180 log.go:181] (0x267bdc0) (5) Data frame handling\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\nI1005 10:07:05.738335 1180 log.go:181] (0x267b8f0) (0x267b9d0) Stream removed, broadcasting: 1\nI1005 10:07:05.739209 1180 log.go:181] (0x267b8f0) Go away received\nI1005 10:07:05.742302 1180 log.go:181] (0x267b8f0) (0x267b9d0) Stream removed, broadcasting: 1\nI1005 10:07:05.742520 1180 log.go:181] (0x267b8f0) (0x2db2070) Stream removed, broadcasting: 3\nI1005 10:07:05.742685 1180 log.go:181] (0x267b8f0) (0x267bdc0) Stream removed, broadcasting: 5\n" Oct 5 10:07:05.752: INFO: stdout: "" Oct 5 10:07:05.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5361 execpod-affinityzj4lc -- /bin/sh -x -c nc -zv -t -w 2 10.110.69.237 80' Oct 5 10:07:07.252: INFO: stderr: "I1005 10:07:07.142939 1201 log.go:181] (0x2cbf3b0) (0x2cbf420) Create stream\nI1005 10:07:07.144774 1201 log.go:181] (0x2cbf3b0) (0x2cbf420) Stream added, broadcasting: 1\nI1005 10:07:07.152962 1201 log.go:181] (0x2cbf3b0) Reply frame received for 1\nI1005 10:07:07.153575 1201 log.go:181] (0x2cbf3b0) (0x29ca070) Create stream\nI1005 10:07:07.153650 1201 log.go:181] (0x2cbf3b0) (0x29ca070) Stream added, broadcasting: 3\nI1005 10:07:07.155240 1201 log.go:181] (0x2cbf3b0) Reply frame received for 3\nI1005 10:07:07.155596 1201 log.go:181] (0x2cbf3b0) (0x2cbf5e0) Create stream\nI1005 10:07:07.155695 1201 log.go:181] (0x2cbf3b0) (0x2cbf5e0) Stream added, broadcasting: 5\nI1005 10:07:07.157135 1201 log.go:181] (0x2cbf3b0) Reply frame received for 5\nI1005 10:07:07.235377 1201 log.go:181] (0x2cbf3b0) Data frame received for 5\nI1005 10:07:07.235665 1201 log.go:181] (0x2cbf5e0) (5) Data frame handling\nI1005 10:07:07.235947 1201 log.go:181] (0x2cbf3b0) Data frame received for 3\nI1005 
10:07:07.236106 1201 log.go:181] (0x29ca070) (3) Data frame handling\nI1005 10:07:07.236350 1201 log.go:181] (0x2cbf3b0) Data frame received for 1\nI1005 10:07:07.236502 1201 log.go:181] (0x2cbf420) (1) Data frame handling\nI1005 10:07:07.237949 1201 log.go:181] (0x2cbf5e0) (5) Data frame sent\nI1005 10:07:07.238603 1201 log.go:181] (0x2cbf420) (1) Data frame sent\nI1005 10:07:07.238947 1201 log.go:181] (0x2cbf3b0) Data frame received for 5\nI1005 10:07:07.239093 1201 log.go:181] (0x2cbf5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.69.237 80\nConnection to 10.110.69.237 80 port [tcp/http] succeeded!\nI1005 10:07:07.239469 1201 log.go:181] (0x2cbf3b0) (0x2cbf420) Stream removed, broadcasting: 1\nI1005 10:07:07.240027 1201 log.go:181] (0x2cbf3b0) Go away received\nI1005 10:07:07.243459 1201 log.go:181] (0x2cbf3b0) (0x2cbf420) Stream removed, broadcasting: 1\nI1005 10:07:07.243743 1201 log.go:181] (0x2cbf3b0) (0x29ca070) Stream removed, broadcasting: 3\nI1005 10:07:07.243994 1201 log.go:181] (0x2cbf3b0) (0x2cbf5e0) Stream removed, broadcasting: 5\n" Oct 5 10:07:07.252: INFO: stdout: "" Oct 5 10:07:07.253: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-5361 execpod-affinityzj4lc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.110.69.237:80/ ; done' Oct 5 10:07:08.822: INFO: stderr: "I1005 10:07:08.586580 1221 log.go:181] (0x311e000) (0x311e070) Create stream\nI1005 10:07:08.588988 1221 log.go:181] (0x311e000) (0x311e070) Stream added, broadcasting: 1\nI1005 10:07:08.599627 1221 log.go:181] (0x311e000) Reply frame received for 1\nI1005 10:07:08.600324 1221 log.go:181] (0x311e000) (0x255e070) Create stream\nI1005 10:07:08.600443 1221 log.go:181] (0x311e000) (0x255e070) Stream added, broadcasting: 3\nI1005 10:07:08.602108 1221 log.go:181] (0x311e000) Reply frame received for 3\nI1005 10:07:08.602336 1221 log.go:181] (0x311e000) (0x311e230) 
Create stream\nI1005 10:07:08.602397 1221 log.go:181] (0x311e000) (0x311e230) Stream added, broadcasting: 5\nI1005 10:07:08.603875 1221 log.go:181] (0x311e000) Reply frame received for 5\nI1005 10:07:08.712933 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.713195 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.713475 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.713671 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.713837 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.713972 1221 log.go:181] (0x255e070) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.718465 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.718563 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.718679 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.719717 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.719818 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.719985 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/I1005 10:07:08.720091 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.720167 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.720260 1221 log.go:181] (0x311e230) (5) Data frame sent\n\nI1005 10:07:08.721470 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.721553 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.721643 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.723080 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.723238 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.723414 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.723640 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 
10:07:08.723798 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.723906 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.724032 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.724200 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.724326 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.728339 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.728439 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.728547 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.728816 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.729060 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.729220 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.729343 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.729442 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.729527 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.732963 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.733060 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.733154 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.733860 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.733947 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.734045 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.734154 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.734263 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.734351 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.738186 1221 log.go:181] (0x311e000) Data frame received for 
3\nI1005 10:07:08.738276 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.738370 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.739098 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.739199 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.739292 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.739459 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.739604 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.739722 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.745423 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.745607 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.745769 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.745952 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.746064 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.746160 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.746244 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.746316 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.746419 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.751638 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.751749 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.751870 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.752000 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.752087 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.752191 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.752287 1221 log.go:181] (0x255e070) (3) Data 
frame handling\nI1005 10:07:08.752409 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.752511 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.755955 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.756051 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.756154 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.756725 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.757024 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.757205 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.757376 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.757517 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.757634 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.760749 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.760821 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.760995 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.761756 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.761827 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.761911 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.762103 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.762253 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.762375 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.765699 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.765830 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.765966 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.766287 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.766386 1221 log.go:181] 
(0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.766466 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.766558 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.766635 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.766729 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.772538 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.772639 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.772741 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.773099 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.773166 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.773269 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\nI1005 10:07:08.773374 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.773533 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.773628 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.773745 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.773868 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.773981 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.777941 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.778071 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.778216 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.778433 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.778532 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.778614 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.778759 1221 log.go:181] (0x311e230) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.778841 
1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.779128 1221 log.go:181] (0x311e230) (5) Data frame sent\nI1005 10:07:08.785687 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.785828 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.785969 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.787139 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.787261 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.787368 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.787468 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.787575 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.787692 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.791373 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.791494 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.791740 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.792426 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.792560 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.792640 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.792753 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.792916 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.793002 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.796130 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.796213 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.796294 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.796967 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.797138 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 
10:07:08.797242 1221 log.go:181] (0x311e230) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.110.69.237:80/\nI1005 10:07:08.797341 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.797422 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.797516 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.804096 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.804171 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.804277 1221 log.go:181] (0x255e070) (3) Data frame sent\nI1005 10:07:08.805128 1221 log.go:181] (0x311e000) Data frame received for 3\nI1005 10:07:08.805242 1221 log.go:181] (0x255e070) (3) Data frame handling\nI1005 10:07:08.805379 1221 log.go:181] (0x311e000) Data frame received for 5\nI1005 10:07:08.805534 1221 log.go:181] (0x311e230) (5) Data frame handling\nI1005 10:07:08.806701 1221 log.go:181] (0x311e000) Data frame received for 1\nI1005 10:07:08.806780 1221 log.go:181] (0x311e070) (1) Data frame handling\nI1005 10:07:08.806867 1221 log.go:181] (0x311e070) (1) Data frame sent\nI1005 10:07:08.807942 1221 log.go:181] (0x311e000) (0x311e070) Stream removed, broadcasting: 1\nI1005 10:07:08.809971 1221 log.go:181] (0x311e000) Go away received\nI1005 10:07:08.813316 1221 log.go:181] (0x311e000) (0x311e070) Stream removed, broadcasting: 1\nI1005 10:07:08.813727 1221 log.go:181] (0x311e000) (0x255e070) Stream removed, broadcasting: 3\nI1005 10:07:08.813877 1221 log.go:181] (0x311e000) (0x311e230) Stream removed, broadcasting: 5\n" Oct 5 10:07:08.828: INFO: stdout: 
"\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j\naffinity-clusterip-hv89j" Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.829: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Received response from host: affinity-clusterip-hv89j Oct 5 10:07:08.830: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-5361, will wait for the garbage collector to delete the pods Oct 5 10:07:08.954: INFO: Deleting ReplicationController affinity-clusterip took: 7.726176ms Oct 5 
10:07:09.355: INFO: Terminating ReplicationController affinity-clusterip pods took: 401.157861ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:07:18.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5361" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:25.760 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":59,"skipped":1010,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:07:18.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:07:18.873: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 5 10:07:39.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3124 create -f -' Oct 5 10:07:44.884: INFO: stderr: "" Oct 5 10:07:44.884: INFO: stdout: "e2e-test-crd-publish-openapi-6144-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 5 10:07:44.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3124 delete e2e-test-crd-publish-openapi-6144-crds test-cr' Oct 5 10:07:46.083: INFO: stderr: "" Oct 5 10:07:46.083: INFO: stdout: "e2e-test-crd-publish-openapi-6144-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Oct 5 10:07:46.084: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3124 apply -f -' Oct 5 10:07:49.317: INFO: stderr: "" Oct 5 10:07:49.317: INFO: stdout: "e2e-test-crd-publish-openapi-6144-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Oct 5 10:07:49.317: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3124 delete e2e-test-crd-publish-openapi-6144-crds test-cr' Oct 5 10:07:50.514: INFO: stderr: "" Oct 5 10:07:50.514: INFO: stdout: "e2e-test-crd-publish-openapi-6144-crd.crd-publish-openapi-test-unknown-in-nested.example.com 
\"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 5 10:07:50.515: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6144-crds' Oct 5 10:07:52.889: INFO: stderr: "" Oct 5 10:07:52.890: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6144-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:08:03.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3124" for this suite. 
• [SLOW TEST:44.746 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":303,"completed":60,"skipped":1023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:08:03.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +tcp +noall 
+answer +search dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1280.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-1280.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1280.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 10:08:11.664: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.667: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.671: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.675: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.686: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.689: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod 
dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.692: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.696: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:11.703: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:16.714: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.719: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.723: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod 
dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.728: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.741: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.745: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.749: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.754: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:16.763: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:21.713: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.718: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.721: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.724: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.734: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.738: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.742: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod 
dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.746: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:21.755: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:26.711: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.716: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.720: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.724: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod 
dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.735: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.739: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.743: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.747: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:26.757: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:31.711: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local 
from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.717: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.721: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.726: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.738: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.742: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.746: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.749: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the 
server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:31.757: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:36.711: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.717: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.721: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.725: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.737: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod 
dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.741: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.746: INFO: Unable to read jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.750: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local from pod dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4: the server could not find the requested resource (get pods dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4) Oct 5 10:08:36.760: INFO: Lookups using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local wheezy_udp@dns-test-service-2.dns-1280.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-1280.svc.cluster.local jessie_udp@dns-test-service-2.dns-1280.svc.cluster.local jessie_tcp@dns-test-service-2.dns-1280.svc.cluster.local] Oct 5 10:08:41.759: INFO: DNS probes using dns-1280/dns-test-ed43206e-b62f-4bbf-9a0e-92ec7c2ca5d4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:08:42.413: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1280" for this suite.
• [SLOW TEST:39.033 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":303,"completed":61,"skipped":1060,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:08:42.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-9527
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 5 10:08:42.673: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 5 10:08:42.789: INFO: The status of Pod netserver-0 is Pending, waiting for it to be
Running (with Ready = true) Oct 5 10:08:45.088: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:08:46.807: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:48.796: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:50.796: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:52.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:54.795: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:56.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:08:58.799: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:09:00.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:09:02.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:09:04.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:09:06.796: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 5 10:09:06.804: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 5 10:09:10.846: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostname&protocol=http&host=10.244.2.253&port=8080&tries=1'] Namespace:pod-network-test-9527 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:09:10.847: INFO: >>> kubeConfig: /root/.kube/config I1005 10:09:10.990110 10 log.go:181] (0x88ec540) (0x88ec5b0) Create stream I1005 10:09:10.991905 10 log.go:181] (0x88ec540) (0x88ec5b0) Stream added, broadcasting: 1 I1005 10:09:11.013890 10 log.go:181] (0x88ec540) Reply frame received for 1 I1005 10:09:11.014448 10 log.go:181] (0x88ec540) (0xa638070) Create stream I1005 10:09:11.014593 10 log.go:181] (0x88ec540) (0xa638070) Stream added, broadcasting: 3 I1005 10:09:11.016282 10 
log.go:181] (0x88ec540) Reply frame received for 3 I1005 10:09:11.017019 10 log.go:181] (0x88ec540) (0xa638230) Create stream I1005 10:09:11.017263 10 log.go:181] (0x88ec540) (0xa638230) Stream added, broadcasting: 5 I1005 10:09:11.018741 10 log.go:181] (0x88ec540) Reply frame received for 5 I1005 10:09:11.089581 10 log.go:181] (0x88ec540) Data frame received for 3 I1005 10:09:11.089848 10 log.go:181] (0xa638070) (3) Data frame handling I1005 10:09:11.090085 10 log.go:181] (0x88ec540) Data frame received for 5 I1005 10:09:11.090269 10 log.go:181] (0xa638230) (5) Data frame handling I1005 10:09:11.090445 10 log.go:181] (0xa638070) (3) Data frame sent I1005 10:09:11.090710 10 log.go:181] (0x88ec540) Data frame received for 3 I1005 10:09:11.090778 10 log.go:181] (0xa638070) (3) Data frame handling I1005 10:09:11.091322 10 log.go:181] (0x88ec540) Data frame received for 1 I1005 10:09:11.091439 10 log.go:181] (0x88ec5b0) (1) Data frame handling I1005 10:09:11.091570 10 log.go:181] (0x88ec5b0) (1) Data frame sent I1005 10:09:11.093400 10 log.go:181] (0x88ec540) (0x88ec5b0) Stream removed, broadcasting: 1 I1005 10:09:11.094265 10 log.go:181] (0x88ec540) Go away received I1005 10:09:11.097021 10 log.go:181] (0x88ec540) (0x88ec5b0) Stream removed, broadcasting: 1 I1005 10:09:11.097231 10 log.go:181] (0x88ec540) (0xa638070) Stream removed, broadcasting: 3 I1005 10:09:11.097401 10 log.go:181] (0x88ec540) (0xa638230) Stream removed, broadcasting: 5 Oct 5 10:09:11.098: INFO: Waiting for responses: map[] Oct 5 10:09:11.103: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.248:8080/dial?request=hostname&protocol=http&host=10.244.1.247&port=8080&tries=1'] Namespace:pod-network-test-9527 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:09:11.104: INFO: >>> kubeConfig: /root/.kube/config I1005 10:09:11.201546 10 log.go:181] (0x88ed420) (0x88ed490) Create stream I1005 
10:09:11.201675 10 log.go:181] (0x88ed420) (0x88ed490) Stream added, broadcasting: 1 I1005 10:09:11.204976 10 log.go:181] (0x88ed420) Reply frame received for 1 I1005 10:09:11.205099 10 log.go:181] (0x88ed420) (0xa564620) Create stream I1005 10:09:11.205166 10 log.go:181] (0x88ed420) (0xa564620) Stream added, broadcasting: 3 I1005 10:09:11.206609 10 log.go:181] (0x88ed420) Reply frame received for 3 I1005 10:09:11.206870 10 log.go:181] (0x88ed420) (0x8a842a0) Create stream I1005 10:09:11.207005 10 log.go:181] (0x88ed420) (0x8a842a0) Stream added, broadcasting: 5 I1005 10:09:11.208620 10 log.go:181] (0x88ed420) Reply frame received for 5 I1005 10:09:11.269553 10 log.go:181] (0x88ed420) Data frame received for 3 I1005 10:09:11.269854 10 log.go:181] (0xa564620) (3) Data frame handling I1005 10:09:11.270024 10 log.go:181] (0x88ed420) Data frame received for 5 I1005 10:09:11.270167 10 log.go:181] (0x8a842a0) (5) Data frame handling I1005 10:09:11.270375 10 log.go:181] (0xa564620) (3) Data frame sent I1005 10:09:11.270530 10 log.go:181] (0x88ed420) Data frame received for 3 I1005 10:09:11.270655 10 log.go:181] (0xa564620) (3) Data frame handling I1005 10:09:11.271682 10 log.go:181] (0x88ed420) Data frame received for 1 I1005 10:09:11.271857 10 log.go:181] (0x88ed490) (1) Data frame handling I1005 10:09:11.272047 10 log.go:181] (0x88ed490) (1) Data frame sent I1005 10:09:11.272222 10 log.go:181] (0x88ed420) (0x88ed490) Stream removed, broadcasting: 1 I1005 10:09:11.272427 10 log.go:181] (0x88ed420) Go away received I1005 10:09:11.272686 10 log.go:181] (0x88ed420) (0x88ed490) Stream removed, broadcasting: 1 I1005 10:09:11.272786 10 log.go:181] (0x88ed420) (0xa564620) Stream removed, broadcasting: 3 I1005 10:09:11.272969 10 log.go:181] (0x88ed420) (0x8a842a0) Stream removed, broadcasting: 5 Oct 5 10:09:11.273: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:09:11.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9527" for this suite.
• [SLOW TEST:28.749 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":303,"completed":62,"skipped":1093,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:09:11.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override command
Oct 5 10:09:11.435: INFO: Waiting up to 5m0s for pod "client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70" in namespace "containers-8572" to be "Succeeded or Failed"
Oct 5 10:09:11.438: INFO: Pod "client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70": Phase="Pending", Reason="", readiness=false. Elapsed: 3.317014ms
Oct 5 10:09:13.446: INFO: Pod "client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011575874s
Oct 5 10:09:15.453: INFO: Pod "client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018049547s
STEP: Saw pod success
Oct 5 10:09:15.453: INFO: Pod "client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70" satisfied condition "Succeeded or Failed"
Oct 5 10:09:15.466: INFO: Trying to get logs from node kali-worker pod client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70 container test-container:
STEP: delete the pod
Oct 5 10:09:15.526: INFO: Waiting for pod client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70 to disappear
Oct 5 10:09:15.531: INFO: Pod client-containers-a3f3ad03-9393-40c5-a966-780740cf3e70 no longer exists
[AfterEach] [k8s.io] Docker Containers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:09:15.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8572" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":303,"completed":63,"skipped":1120,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:09:15.576: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 10:09:25.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 10:09:27.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737489365, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737489365, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737489365, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737489365, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 10:09:30.585: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:09:30.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6741" for this suite.
STEP: Destroying namespace "webhook-6741-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:15.323 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":303,"completed":64,"skipped":1174,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:09:30.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name cm-test-opt-del-15483db6-471c-4827-bf70-dae25c3a5c45
STEP: Creating configMap with name cm-test-opt-upd-62ba4c4e-b427-45ec-ac14-59e54533fb02
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-15483db6-471c-4827-bf70-dae25c3a5c45
STEP: Updating configmap cm-test-opt-upd-62ba4c4e-b427-45ec-ac14-59e54533fb02
STEP: Creating configMap with name cm-test-opt-create-c4b3ae2d-d71b-43f3-b574-61756ab6f0a6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:09:39.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-706" for this suite.
• [SLOW TEST:8.256 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":65,"skipped":1197,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:09:39.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service
account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1415 STEP: creating a pod Oct 5 10:09:39.257: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.20 --namespace=kubectl-175 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Oct 5 10:09:40.510: INFO: stderr: "" Oct 5 10:09:40.511: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Waiting for log generator to start. Oct 5 10:09:40.511: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Oct 5 10:09:40.512: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-175" to be "running and ready, or succeeded" Oct 5 10:09:40.521: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.62416ms Oct 5 10:09:42.584: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072114114s Oct 5 10:09:44.592: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.079531829s Oct 5 10:09:44.592: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Oct 5 10:09:44.592: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Oct 5 10:09:44.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175' Oct 5 10:09:45.932: INFO: stderr: "" Oct 5 10:09:45.932: INFO: stdout: "I1005 10:09:43.202418 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/h97s 368\nI1005 10:09:43.402529 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/x9vr 433\nI1005 10:09:43.602601 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/87d 421\nI1005 10:09:43.802606 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wxpn 321\nI1005 10:09:44.002614 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/dp2j 579\nI1005 10:09:44.202527 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/w8n 463\nI1005 10:09:44.402596 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/z98 373\nI1005 10:09:44.602527 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/wg6m 304\nI1005 10:09:44.802534 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/w59p 372\nI1005 10:09:45.002532 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/s5s9 508\nI1005 10:09:45.202513 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/22l 278\nI1005 10:09:45.402576 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/jg4 315\nI1005 10:09:45.602570 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/gvz 249\nI1005 10:09:45.802514 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/j9f 297\n" STEP: limiting log lines Oct 5 10:09:45.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175 --tail=1' Oct 5 10:09:47.240: INFO: stderr: "" Oct 5 10:09:47.240: INFO: stdout: "I1005 10:09:47.202522 1 logs_generator.go:76] 20 POST 
/api/v1/namespaces/default/pods/wvp 351\n" Oct 5 10:09:47.240: INFO: got output "I1005 10:09:47.202522 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/wvp 351\n" STEP: limiting log bytes Oct 5 10:09:47.241: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175 --limit-bytes=1' Oct 5 10:09:48.465: INFO: stderr: "" Oct 5 10:09:48.465: INFO: stdout: "I" Oct 5 10:09:48.465: INFO: got output "I" STEP: exposing timestamps Oct 5 10:09:48.466: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175 --tail=1 --timestamps' Oct 5 10:09:49.762: INFO: stderr: "" Oct 5 10:09:49.762: INFO: stdout: "2020-10-05T10:09:49.602680634Z I1005 10:09:49.602534 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/kube-system/pods/z8j 552\n" Oct 5 10:09:49.762: INFO: got output "2020-10-05T10:09:49.602680634Z I1005 10:09:49.602534 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/kube-system/pods/z8j 552\n" STEP: restricting to a time range Oct 5 10:09:52.266: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175 --since=1s' Oct 5 10:09:53.556: INFO: stderr: "" Oct 5 10:09:53.556: INFO: stdout: "I1005 10:09:52.602515 1 logs_generator.go:76] 47 GET /api/v1/namespaces/ns/pods/4z5 435\nI1005 10:09:52.802630 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/wv4 333\nI1005 10:09:53.002523 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/kube-system/pods/xmsd 487\nI1005 10:09:53.202552 1 logs_generator.go:76] 50 GET /api/v1/namespaces/kube-system/pods/rd2v 287\nI1005 10:09:53.402544 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/default/pods/c4sq 534\n" Oct 5 10:09:53.557: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-175 --since=24h' Oct 5 10:09:54.807: INFO: stderr: "" Oct 5 10:09:54.808: INFO: stdout: "I1005 10:09:43.202418 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/h97s 368\nI1005 10:09:43.402529 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/x9vr 433\nI1005 10:09:43.602601 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/87d 421\nI1005 10:09:43.802606 1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/wxpn 321\nI1005 10:09:44.002614 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/dp2j 579\nI1005 10:09:44.202527 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/w8n 463\nI1005 10:09:44.402596 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/z98 373\nI1005 10:09:44.602527 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/wg6m 304\nI1005 10:09:44.802534 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/w59p 372\nI1005 10:09:45.002532 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/s5s9 508\nI1005 10:09:45.202513 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/22l 278\nI1005 10:09:45.402576 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/jg4 315\nI1005 10:09:45.602570 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/gvz 249\nI1005 10:09:45.802514 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/j9f 297\nI1005 10:09:46.002556 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/2g6 445\nI1005 10:09:46.202566 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/9q6s 446\nI1005 10:09:46.402567 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/22f 202\nI1005 10:09:46.602513 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/lbz4 324\nI1005 10:09:46.802545 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/zw8v 
438\nI1005 10:09:47.002571 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/t9v 531\nI1005 10:09:47.202522 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/wvp 351\nI1005 10:09:47.402544 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/6fq 254\nI1005 10:09:47.602565 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/88vp 387\nI1005 10:09:47.802536 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/mbs 399\nI1005 10:09:48.002580 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/qm97 262\nI1005 10:09:48.202539 1 logs_generator.go:76] 25 GET /api/v1/namespaces/default/pods/9lj 318\nI1005 10:09:48.402576 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/b7g 348\nI1005 10:09:48.602540 1 logs_generator.go:76] 27 POST /api/v1/namespaces/kube-system/pods/gqtn 238\nI1005 10:09:48.802562 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/cmbp 219\nI1005 10:09:49.002567 1 logs_generator.go:76] 29 POST /api/v1/namespaces/kube-system/pods/6vg 354\nI1005 10:09:49.202531 1 logs_generator.go:76] 30 PUT /api/v1/namespaces/ns/pods/mq4 406\nI1005 10:09:49.402607 1 logs_generator.go:76] 31 PUT /api/v1/namespaces/kube-system/pods/z8f 333\nI1005 10:09:49.602534 1 logs_generator.go:76] 32 PUT /api/v1/namespaces/kube-system/pods/z8j 552\nI1005 10:09:49.802540 1 logs_generator.go:76] 33 PUT /api/v1/namespaces/ns/pods/jmhc 403\nI1005 10:09:50.002606 1 logs_generator.go:76] 34 GET /api/v1/namespaces/default/pods/w7h 577\nI1005 10:09:50.202623 1 logs_generator.go:76] 35 PUT /api/v1/namespaces/kube-system/pods/kvq 545\nI1005 10:09:50.402609 1 logs_generator.go:76] 36 PUT /api/v1/namespaces/ns/pods/rss6 548\nI1005 10:09:50.602543 1 logs_generator.go:76] 37 GET /api/v1/namespaces/kube-system/pods/z9kc 452\nI1005 10:09:50.802629 1 logs_generator.go:76] 38 GET /api/v1/namespaces/default/pods/7x7l 271\nI1005 10:09:51.002620 1 logs_generator.go:76] 39 PUT /api/v1/namespaces/kube-system/pods/wxg9 
222\nI1005 10:09:51.202629 1 logs_generator.go:76] 40 POST /api/v1/namespaces/default/pods/s8lg 270\nI1005 10:09:51.402612 1 logs_generator.go:76] 41 PUT /api/v1/namespaces/default/pods/55r 251\nI1005 10:09:51.602631 1 logs_generator.go:76] 42 POST /api/v1/namespaces/ns/pods/fzk 283\nI1005 10:09:51.802668 1 logs_generator.go:76] 43 GET /api/v1/namespaces/ns/pods/l4rz 590\nI1005 10:09:52.002584 1 logs_generator.go:76] 44 GET /api/v1/namespaces/default/pods/vwqj 292\nI1005 10:09:52.202588 1 logs_generator.go:76] 45 POST /api/v1/namespaces/kube-system/pods/hr5 496\nI1005 10:09:52.402577 1 logs_generator.go:76] 46 POST /api/v1/namespaces/default/pods/z7zf 431\nI1005 10:09:52.602515 1 logs_generator.go:76] 47 GET /api/v1/namespaces/ns/pods/4z5 435\nI1005 10:09:52.802630 1 logs_generator.go:76] 48 PUT /api/v1/namespaces/default/pods/wv4 333\nI1005 10:09:53.002523 1 logs_generator.go:76] 49 PUT /api/v1/namespaces/kube-system/pods/xmsd 487\nI1005 10:09:53.202552 1 logs_generator.go:76] 50 GET /api/v1/namespaces/kube-system/pods/rd2v 287\nI1005 10:09:53.402544 1 logs_generator.go:76] 51 PUT /api/v1/namespaces/default/pods/c4sq 534\nI1005 10:09:53.602594 1 logs_generator.go:76] 52 GET /api/v1/namespaces/default/pods/dgvg 224\nI1005 10:09:53.802540 1 logs_generator.go:76] 53 GET /api/v1/namespaces/default/pods/lzzm 544\nI1005 10:09:54.002619 1 logs_generator.go:76] 54 PUT /api/v1/namespaces/ns/pods/gkwg 285\nI1005 10:09:54.202559 1 logs_generator.go:76] 55 POST /api/v1/namespaces/default/pods/g29h 566\nI1005 10:09:54.402561 1 logs_generator.go:76] 56 POST /api/v1/namespaces/kube-system/pods/hdlt 367\nI1005 10:09:54.602555 1 logs_generator.go:76] 57 POST /api/v1/namespaces/kube-system/pods/hz7p 441\n" [AfterEach] Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1421 Oct 5 10:09:54.810: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-175' Oct 5 10:09:58.712: INFO: stderr: "" Oct 5 10:09:58.712: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:09:58.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-175" for this suite. • [SLOW TEST:19.565 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":303,"completed":66,"skipped":1212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:09:58.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: 
Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 5 10:10:10.175: INFO: starting watch STEP: patching STEP: updating Oct 5 10:10:10.198: INFO: waiting for watch events with expected annotations Oct 5 10:10:10.199: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:10:10.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-4912" for this suite. 
• [SLOW TEST:11.620 seconds] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should support CSR API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":303,"completed":67,"skipped":1267,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:10:10.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7501 [It] Should recreate evicted statefulset [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7501 STEP: Creating statefulset with conflicting port in namespace statefulset-7501 STEP: Waiting until pod test-pod starts running in namespace statefulset-7501 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-7501 Oct 5 10:10:16.564: INFO: Observed stateful pod in namespace: statefulset-7501, name: ss-0, uid: 3161e06a-c691-4add-8316-71d98addd405, status phase: Pending. Waiting for statefulset controller to delete. Oct 5 10:10:17.117: INFO: Observed stateful pod in namespace: statefulset-7501, name: ss-0, uid: 3161e06a-c691-4add-8316-71d98addd405, status phase: Failed. Waiting for statefulset controller to delete. Oct 5 10:10:17.174: INFO: Observed stateful pod in namespace: statefulset-7501, name: ss-0, uid: 3161e06a-c691-4add-8316-71d98addd405, status phase: Failed. Waiting for statefulset controller to delete. 
Oct 5 10:10:17.197: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7501 STEP: Removing pod with conflicting port in namespace statefulset-7501 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-7501 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 10:10:23.343: INFO: Deleting all statefulsets in ns statefulset-7501 Oct 5 10:10:23.347: INFO: Scaling statefulset ss to 0 Oct 5 10:10:33.406: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 10:10:33.411: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:10:33.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7501" for this suite. 
• [SLOW TEST:23.107 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":303,"completed":68,"skipped":1297,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:10:33.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create 
the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 10:10:37.612: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:10:37.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1939" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":303,"completed":69,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:10:37.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-15a234f0-3149-4e0b-9f3a-508c35959613 STEP: Creating the pod STEP: Waiting for 
pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:10:43.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4287" for this suite. • [SLOW TEST:6.219 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":70,"skipped":1323,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:10:43.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Oct 5 10:10:51.477: INFO: 10 pods remaining Oct 5 10:10:51.477: INFO: 3 pods have nil DeletionTimestamp Oct 5 10:10:51.477: INFO: Oct 5 10:10:53.107: INFO: 0 pods remaining Oct 5 10:10:53.107: INFO: 0 pods have nil DeletionTimestamp Oct 5 10:10:53.107: INFO: Oct 5 10:10:54.643: INFO: 0 pods remaining Oct 5 10:10:54.644: INFO: 0 pods have nil DeletionTimestamp Oct 5 10:10:54.644: INFO: STEP: Gathering metrics W1005 10:10:56.307927 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 10:11:58.338: INFO: MetricsGrabber failed to grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:11:58.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7387" for this suite. 
• [SLOW TEST:74.448 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":303,"completed":71,"skipped":1325,"failed":0} [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:11:58.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:11:58.489: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Pending, waiting 
for it to be Running (with Ready = true) Oct 5 10:12:00.544: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:12:02.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:04.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:06.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:08.498: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:10.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:12.498: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:14.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:16.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:18.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = false) Oct 5 10:12:20.497: INFO: The status of Pod test-webserver-36e1ef63-1c22-41d0-87d6-4ee8b32bcbdd is Running (Ready = true) Oct 5 10:12:20.503: INFO: Container started at 2020-10-05 10:12:01 +0000 UTC, pod became ready at 2020-10-05 10:12:19 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:12:20.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8859" for this suite. 
• [SLOW TEST:22.162 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":303,"completed":72,"skipped":1325,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:12:20.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 [It] should create a pod from an image when restart is Never [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 5 10:12:20.605: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2839' Oct 5 10:12:21.973: INFO: stderr: "" Oct 5 10:12:21.973: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1550 Oct 5 10:12:21.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2839' Oct 5 10:12:28.182: INFO: stderr: "" Oct 5 10:12:28.182: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:12:28.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2839" for this suite. 
• [SLOW TEST:7.678 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541 should create a pod from an image when restart is Never [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":303,"completed":73,"skipped":1338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:12:28.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-af25c977-f76d-4af7-8d2e-7f179df60b7a STEP: Creating a pod to test consume 
secrets Oct 5 10:12:28.341: INFO: Waiting up to 5m0s for pod "pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b" in namespace "secrets-2918" to be "Succeeded or Failed" Oct 5 10:12:28.395: INFO: Pod "pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 54.027759ms Oct 5 10:12:30.414: INFO: Pod "pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072270701s Oct 5 10:12:32.422: INFO: Pod "pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08069942s STEP: Saw pod success Oct 5 10:12:32.422: INFO: Pod "pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b" satisfied condition "Succeeded or Failed" Oct 5 10:12:32.427: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b container secret-volume-test: STEP: delete the pod Oct 5 10:12:32.516: INFO: Waiting for pod pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b to disappear Oct 5 10:12:32.532: INFO: Pod pod-secrets-fd1a08e7-c5d0-468a-aa49-a7ba9dc67c9b no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:12:32.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2918" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":74,"skipped":1439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:12:32.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:12:32.647: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:12:39.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-790" for this suite. 
• [SLOW TEST:6.639 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":303,"completed":75,"skipped":1475,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:12:39.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-2720 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2720 STEP: creating replication controller externalsvc in namespace services-2720 I1005 10:12:39.473009 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-2720, replica count: 2 I1005 10:12:42.524771 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:12:45.526090 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Oct 5 10:12:45.613: INFO: Creating new exec pod Oct 5 10:12:49.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2720 execpodngwhq -- /bin/sh -x -c nslookup clusterip-service.services-2720.svc.cluster.local' Oct 5 10:12:51.208: INFO: stderr: "I1005 10:12:51.055653 1545 log.go:181] (0x247c700) (0x247c850) Create stream\nI1005 10:12:51.058334 1545 log.go:181] (0x247c700) (0x247c850) Stream added, broadcasting: 1\nI1005 10:12:51.068311 1545 log.go:181] (0x247c700) Reply frame received for 1\nI1005 10:12:51.068752 1545 log.go:181] (0x247c700) (0x25cc0e0) Create stream\nI1005 10:12:51.068815 1545 log.go:181] (0x247c700) (0x25cc0e0) Stream added, broadcasting: 3\nI1005 10:12:51.070480 1545 log.go:181] (0x247c700) Reply frame received for 3\nI1005 10:12:51.070952 1545 log.go:181] (0x247c700) (0x26eec40) Create stream\nI1005 10:12:51.071077 1545 log.go:181] (0x247c700) 
(0x26eec40) Stream added, broadcasting: 5\nI1005 10:12:51.073099 1545 log.go:181] (0x247c700) Reply frame received for 5\nI1005 10:12:51.164225 1545 log.go:181] (0x247c700) Data frame received for 5\nI1005 10:12:51.164572 1545 log.go:181] (0x26eec40) (5) Data frame handling\nI1005 10:12:51.165237 1545 log.go:181] (0x26eec40) (5) Data frame sent\n+ nslookup clusterip-service.services-2720.svc.cluster.local\nI1005 10:12:51.189319 1545 log.go:181] (0x247c700) Data frame received for 3\nI1005 10:12:51.189460 1545 log.go:181] (0x25cc0e0) (3) Data frame handling\nI1005 10:12:51.189652 1545 log.go:181] (0x25cc0e0) (3) Data frame sent\nI1005 10:12:51.191007 1545 log.go:181] (0x247c700) Data frame received for 3\nI1005 10:12:51.191261 1545 log.go:181] (0x25cc0e0) (3) Data frame handling\nI1005 10:12:51.191515 1545 log.go:181] (0x25cc0e0) (3) Data frame sent\nI1005 10:12:51.191686 1545 log.go:181] (0x247c700) Data frame received for 3\nI1005 10:12:51.191805 1545 log.go:181] (0x247c700) Data frame received for 5\nI1005 10:12:51.192019 1545 log.go:181] (0x26eec40) (5) Data frame handling\nI1005 10:12:51.192241 1545 log.go:181] (0x25cc0e0) (3) Data frame handling\nI1005 10:12:51.193890 1545 log.go:181] (0x247c700) Data frame received for 1\nI1005 10:12:51.194039 1545 log.go:181] (0x247c850) (1) Data frame handling\nI1005 10:12:51.194209 1545 log.go:181] (0x247c850) (1) Data frame sent\nI1005 10:12:51.195646 1545 log.go:181] (0x247c700) (0x247c850) Stream removed, broadcasting: 1\nI1005 10:12:51.198537 1545 log.go:181] (0x247c700) (0x247c850) Stream removed, broadcasting: 1\nI1005 10:12:51.198751 1545 log.go:181] (0x247c700) (0x25cc0e0) Stream removed, broadcasting: 3\nI1005 10:12:51.199791 1545 log.go:181] (0x247c700) (0x26eec40) Stream removed, broadcasting: 5\n" Oct 5 10:12:51.209: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-2720.svc.cluster.local\tcanonical name = 
externalsvc.services-2720.svc.cluster.local.\nName:\texternalsvc.services-2720.svc.cluster.local\nAddress: 10.110.186.177\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2720, will wait for the garbage collector to delete the pods Oct 5 10:12:51.275: INFO: Deleting ReplicationController externalsvc took: 9.470944ms Oct 5 10:12:51.376: INFO: Terminating ReplicationController externalsvc pods took: 100.712551ms Oct 5 10:12:58.724: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:12:58.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2720" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:19.563 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":303,"completed":76,"skipped":1477,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:12:58.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-3a313cc0-d6db-46df-9cb3-8f6ff2c10116 STEP: Creating a pod to test consume secrets Oct 5 10:12:58.870: INFO: Waiting up to 5m0s for pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe" in namespace "secrets-8021" to be "Succeeded or Failed" Oct 5 10:12:58.881: INFO: Pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe": Phase="Pending", Reason="", readiness=false. Elapsed: 10.322186ms Oct 5 10:13:00.889: INFO: Pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018699409s Oct 5 10:13:02.897: INFO: Pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe": Phase="Running", Reason="", readiness=true. Elapsed: 4.026510521s Oct 5 10:13:04.905: INFO: Pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.034225381s STEP: Saw pod success Oct 5 10:13:04.905: INFO: Pod "pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe" satisfied condition "Succeeded or Failed" Oct 5 10:13:04.910: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe container secret-volume-test: STEP: delete the pod Oct 5 10:13:05.009: INFO: Waiting for pod pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe to disappear Oct 5 10:13:05.020: INFO: Pod pod-secrets-3839d101-340b-4f0d-a6a7-524dc58861fe no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:05.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8021" for this suite. • [SLOW TEST:6.277 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":77,"skipped":1482,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: 
Creating a kubernetes client Oct 5 10:13:05.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:13:05.190: INFO: Waiting up to 5m0s for pod "downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd" in namespace "downward-api-3096" to be "Succeeded or Failed" Oct 5 10:13:05.245: INFO: Pod "downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 54.168997ms Oct 5 10:13:07.252: INFO: Pod "downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061330666s Oct 5 10:13:09.258: INFO: Pod "downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.06791498s STEP: Saw pod success Oct 5 10:13:09.259: INFO: Pod "downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd" satisfied condition "Succeeded or Failed" Oct 5 10:13:09.411: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd container client-container: STEP: delete the pod Oct 5 10:13:09.618: INFO: Waiting for pod downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd to disappear Oct 5 10:13:09.643: INFO: Pod downwardapi-volume-90107b17-49be-43f3-8721-7ff4620b2dcd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:09.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3096" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":78,"skipped":1488,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] PodTemplates should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:13:09.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Create set of pod templates Oct 5 10:13:09.793: INFO: created test-podtemplate-1 Oct 5 10:13:09.880: INFO: created test-podtemplate-2 Oct 5 10:13:09.887: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates Oct 5 10:13:09.908: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity Oct 5 10:13:09.932: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:09.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-5950" for this suite. •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":303,"completed":79,"skipped":1504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:13:09.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support remote command execution over websockets [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:13:10.045: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:14.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6603" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":303,"completed":80,"skipped":1537,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:13:14.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 10:13:20.920: INFO: Successfully 
updated pod "annotationupdate13081dfe-611a-497b-9d24-1f27118fa256" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:22.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9679" for this suite. • [SLOW TEST:8.745 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":81,"skipped":1560,"failed":0} [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:13:23.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to 
test emptydir 0777 on tmpfs Oct 5 10:13:23.122: INFO: Waiting up to 5m0s for pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322" in namespace "emptydir-3033" to be "Succeeded or Failed" Oct 5 10:13:23.164: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322": Phase="Pending", Reason="", readiness=false. Elapsed: 41.30561ms Oct 5 10:13:25.171: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049188609s Oct 5 10:13:27.547: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322": Phase="Pending", Reason="", readiness=false. Elapsed: 4.42511868s Oct 5 10:13:29.770: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322": Phase="Running", Reason="", readiness=true. Elapsed: 6.64790248s Oct 5 10:13:31.803: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.68065078s STEP: Saw pod success Oct 5 10:13:31.803: INFO: Pod "pod-0af277e3-a2ad-4604-bd64-14e835aa1322" satisfied condition "Succeeded or Failed" Oct 5 10:13:31.807: INFO: Trying to get logs from node kali-worker2 pod pod-0af277e3-a2ad-4604-bd64-14e835aa1322 container test-container: STEP: delete the pod Oct 5 10:13:32.300: INFO: Waiting for pod pod-0af277e3-a2ad-4604-bd64-14e835aa1322 to disappear Oct 5 10:13:32.313: INFO: Pod pod-0af277e3-a2ad-4604-bd64-14e835aa1322 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:13:32.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3033" for this suite. 
• [SLOW TEST:9.327 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":82,"skipped":1560,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:13:32.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-4624 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 5 10:13:33.688: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 5 10:13:33.817: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with 
Ready = true) Oct 5 10:13:35.868: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:13:37.965: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:13:39.823: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:41.922: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:43.845: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:45.983: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:47.829: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:49.827: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:51.824: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:13:53.841: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 5 10:13:53.854: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 5 10:13:55.861: INFO: The status of Pod netserver-1 is Running (Ready = false) Oct 5 10:13:57.861: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 5 10:14:03.965: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.17:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4624 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:14:03.965: INFO: >>> kubeConfig: /root/.kube/config I1005 10:14:04.068197 10 log.go:181] (0xb3e42a0) (0xb3e4380) Create stream I1005 10:14:04.068340 10 log.go:181] (0xb3e42a0) (0xb3e4380) Stream added, broadcasting: 1 I1005 10:14:04.072490 10 log.go:181] (0xb3e42a0) Reply frame received for 1 I1005 10:14:04.072615 10 log.go:181] (0xb3e42a0) (0xaee2380) Create stream I1005 10:14:04.072682 10 log.go:181] (0xb3e42a0) (0xaee2380) Stream added, broadcasting: 3 I1005 10:14:04.074012 10 
log.go:181] (0xb3e42a0) Reply frame received for 3 I1005 10:14:04.074150 10 log.go:181] (0xb3e42a0) (0xb3e4700) Create stream I1005 10:14:04.074214 10 log.go:181] (0xb3e42a0) (0xb3e4700) Stream added, broadcasting: 5 I1005 10:14:04.075307 10 log.go:181] (0xb3e42a0) Reply frame received for 5 I1005 10:14:04.143219 10 log.go:181] (0xb3e42a0) Data frame received for 3 I1005 10:14:04.143424 10 log.go:181] (0xaee2380) (3) Data frame handling I1005 10:14:04.143586 10 log.go:181] (0xb3e42a0) Data frame received for 5 I1005 10:14:04.143810 10 log.go:181] (0xb3e4700) (5) Data frame handling I1005 10:14:04.143943 10 log.go:181] (0xaee2380) (3) Data frame sent I1005 10:14:04.144053 10 log.go:181] (0xb3e42a0) Data frame received for 3 I1005 10:14:04.144164 10 log.go:181] (0xaee2380) (3) Data frame handling I1005 10:14:04.146096 10 log.go:181] (0xb3e42a0) Data frame received for 1 I1005 10:14:04.146198 10 log.go:181] (0xb3e4380) (1) Data frame handling I1005 10:14:04.146293 10 log.go:181] (0xb3e4380) (1) Data frame sent I1005 10:14:04.146373 10 log.go:181] (0xb3e42a0) (0xb3e4380) Stream removed, broadcasting: 1 I1005 10:14:04.146531 10 log.go:181] (0xb3e42a0) Go away received I1005 10:14:04.146901 10 log.go:181] (0xb3e42a0) (0xb3e4380) Stream removed, broadcasting: 1 I1005 10:14:04.147067 10 log.go:181] (0xb3e42a0) (0xaee2380) Stream removed, broadcasting: 3 I1005 10:14:04.147166 10 log.go:181] (0xb3e42a0) (0xb3e4700) Stream removed, broadcasting: 5 Oct 5 10:14:04.147: INFO: Found all expected endpoints: [netserver-0] Oct 5 10:14:04.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.9:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4624 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:14:04.152: INFO: >>> kubeConfig: /root/.kube/config I1005 10:14:04.249289 10 log.go:181] (0xb7f2e70) (0xb7f2ee0) Create stream I1005 
10:14:04.249451 10 log.go:181] (0xb7f2e70) (0xb7f2ee0) Stream added, broadcasting: 1 I1005 10:14:04.253136 10 log.go:181] (0xb7f2e70) Reply frame received for 1 I1005 10:14:04.253282 10 log.go:181] (0xb7f2e70) (0xb3e4e00) Create stream I1005 10:14:04.253348 10 log.go:181] (0xb7f2e70) (0xb3e4e00) Stream added, broadcasting: 3 I1005 10:14:04.254462 10 log.go:181] (0xb7f2e70) Reply frame received for 3 I1005 10:14:04.254587 10 log.go:181] (0xb7f2e70) (0xb7f31f0) Create stream I1005 10:14:04.254652 10 log.go:181] (0xb7f2e70) (0xb7f31f0) Stream added, broadcasting: 5 I1005 10:14:04.256156 10 log.go:181] (0xb7f2e70) Reply frame received for 5 I1005 10:14:04.326778 10 log.go:181] (0xb7f2e70) Data frame received for 3 I1005 10:14:04.326985 10 log.go:181] (0xb3e4e00) (3) Data frame handling I1005 10:14:04.327101 10 log.go:181] (0xb3e4e00) (3) Data frame sent I1005 10:14:04.327207 10 log.go:181] (0xb7f2e70) Data frame received for 3 I1005 10:14:04.327347 10 log.go:181] (0xb3e4e00) (3) Data frame handling I1005 10:14:04.327500 10 log.go:181] (0xb7f2e70) Data frame received for 5 I1005 10:14:04.327690 10 log.go:181] (0xb7f31f0) (5) Data frame handling I1005 10:14:04.327887 10 log.go:181] (0xb7f2e70) Data frame received for 1 I1005 10:14:04.327992 10 log.go:181] (0xb7f2ee0) (1) Data frame handling I1005 10:14:04.328110 10 log.go:181] (0xb7f2ee0) (1) Data frame sent I1005 10:14:04.328224 10 log.go:181] (0xb7f2e70) (0xb7f2ee0) Stream removed, broadcasting: 1 I1005 10:14:04.328665 10 log.go:181] (0xb7f2e70) (0xb7f2ee0) Stream removed, broadcasting: 1 I1005 10:14:04.328799 10 log.go:181] (0xb7f2e70) (0xb3e4e00) Stream removed, broadcasting: 3 I1005 10:14:04.328961 10 log.go:181] (0xb7f2e70) (0xb7f31f0) Stream removed, broadcasting: 5 I1005 10:14:04.329059 10 log.go:181] (0xb7f2e70) Go away received Oct 5 10:14:04.329: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:14:04.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4624" for this suite. • [SLOW TEST:31.987 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":83,"skipped":1582,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:14:04.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple 
CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Oct 5 10:14:04.443: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Oct 5 10:15:26.742: INFO: >>> kubeConfig: /root/.kube/config Oct 5 10:15:47.331: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:16:59.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5865" for this suite. • [SLOW TEST:175.520 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":303,"completed":84,"skipped":1597,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:16:59.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:16:59.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2970" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":303,"completed":85,"skipped":1612,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:16:59.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:17:00.102: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Oct 5 10:17:00.117: INFO: Number of nodes with available pods: 0 Oct 5 10:17:00.117: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Oct 5 10:17:00.206: INFO: Number of nodes with available pods: 0 Oct 5 10:17:00.206: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:01.214: INFO: Number of nodes with available pods: 0 Oct 5 10:17:01.214: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:02.257: INFO: Number of nodes with available pods: 0 Oct 5 10:17:02.257: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:03.215: INFO: Number of nodes with available pods: 0 Oct 5 10:17:03.216: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:04.213: INFO: Number of nodes with available pods: 1 Oct 5 10:17:04.213: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Oct 5 10:17:04.252: INFO: Number of nodes with available pods: 1 Oct 5 10:17:04.252: INFO: Number of running nodes: 0, number of available pods: 1 Oct 5 10:17:05.274: INFO: Number of nodes with available pods: 0 Oct 5 10:17:05.274: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Oct 5 10:17:05.306: INFO: Number of nodes with available pods: 0 Oct 5 10:17:05.306: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:06.332: INFO: Number of nodes with available pods: 0 Oct 5 10:17:06.332: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:07.314: INFO: Number of nodes with available pods: 0 Oct 5 10:17:07.314: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:08.314: INFO: Number of nodes with available pods: 0 Oct 5 10:17:08.315: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:09.315: INFO: Number of nodes with available pods: 0 Oct 5 10:17:09.315: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:10.328: INFO: Number of nodes with available pods: 0 Oct 5 
10:17:10.328: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:11.316: INFO: Number of nodes with available pods: 0 Oct 5 10:17:11.316: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:12.316: INFO: Number of nodes with available pods: 0 Oct 5 10:17:12.316: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:13.317: INFO: Number of nodes with available pods: 0 Oct 5 10:17:13.317: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:14.314: INFO: Number of nodes with available pods: 0 Oct 5 10:17:14.314: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:15.313: INFO: Number of nodes with available pods: 0 Oct 5 10:17:15.314: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:16.315: INFO: Number of nodes with available pods: 0 Oct 5 10:17:16.315: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:17.315: INFO: Number of nodes with available pods: 0 Oct 5 10:17:17.315: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:18.328: INFO: Number of nodes with available pods: 0 Oct 5 10:17:18.329: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:19.312: INFO: Number of nodes with available pods: 0 Oct 5 10:17:19.313: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:20.313: INFO: Number of nodes with available pods: 0 Oct 5 10:17:20.313: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:21.315: INFO: Number of nodes with available pods: 0 Oct 5 10:17:21.315: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 10:17:22.315: INFO: Number of nodes with available pods: 1 Oct 5 10:17:22.315: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4949, will wait for the garbage collector to delete the pods Oct 5 10:17:22.391: INFO: Deleting DaemonSet.extensions daemon-set took: 9.249179ms Oct 5 10:17:22.792: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.822999ms Oct 5 10:17:28.198: INFO: Number of nodes with available pods: 0 Oct 5 10:17:28.198: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 10:17:28.203: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4949/daemonsets","resourceVersion":"3158796"},"items":null} Oct 5 10:17:28.207: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4949/pods","resourceVersion":"3158796"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:17:28.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4949" for this suite. 
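
The "complex daemon" flow above drives a DaemonSet through a node selector: pods only land on nodes carrying the matching label, so relabeling a node from "blue" to "green" unschedules and reschedules the daemon pod, and the test then flips the update strategy to RollingUpdate. A manifest consistent with that flow might look like the sketch below; the label key `color`, the `app` selector, and the pause image are assumptions (the real test generates its selector labels):

```yaml
# Illustrative DaemonSet with a node selector, as exercised above. Relabeling
# a node (e.g. `kubectl label node kali-worker2 color=green --overwrite`)
# moves the daemon pod on or off that node.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  updateStrategy:
    type: RollingUpdate            # the test switches to this mid-run
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green               # hypothetical label key/value
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2
```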
• [SLOW TEST:28.292 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":303,"completed":86,"skipped":1627,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:17:28.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the 
expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:18:01.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-122" for this suite. 
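
The container names in the runtime test above ('terminate-cmd-rpa', 'rpof', 'rpn') appear to correspond to the three pod restart policies (Always, OnFailure, Never), with the test asserting the resulting RestartCount, Phase, Ready condition, and State for each. A minimal sketch of the OnFailure variant, under that reading; the pod name, image, and exit command are illustrative:

```yaml
# Illustrative pod for the OnFailure case: a failing command makes the
# kubelet restart the container with backoff, incrementing RestartCount,
# until (in the real test) the scripted command eventually exits 0.
apiVersion: v1
kind: Pod
metadata:
  name: terminate-cmd-demo
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox:1.36
    command: ["sh", "-c", "exit 1"]
```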
• [SLOW TEST:33.367 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 when starting a container that exits /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":303,"completed":87,"skipped":1630,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:18:01.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace 
job-8206, will wait for the garbage collector to delete the pods Oct 5 10:18:08.072: INFO: Deleting Job.batch foo took: 11.486395ms Oct 5 10:18:08.473: INFO: Terminating Job.batch foo pods took: 401.03387ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:18:48.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8206" for this suite. • [SLOW TEST:47.155 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":303,"completed":88,"skipped":1674,"failed":0} SSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:18:48.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:18:48.877: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc" in namespace "downward-api-6424" to be "Succeeded or Failed" Oct 5 10:18:48.928: INFO: Pod "downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc": Phase="Pending", Reason="", readiness=false. Elapsed: 51.303546ms Oct 5 10:18:50.936: INFO: Pod "downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05884958s Oct 5 10:18:52.944: INFO: Pod "downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067102508s STEP: Saw pod success Oct 5 10:18:52.944: INFO: Pod "downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc" satisfied condition "Succeeded or Failed" Oct 5 10:18:52.951: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc container client-container: STEP: delete the pod Oct 5 10:18:53.103: INFO: Waiting for pod downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc to disappear Oct 5 10:18:53.123: INFO: Pod downwardapi-volume-6fdad21e-a8b7-46ed-9be7-f7d4282321cc no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:18:53.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6424" for this suite. 
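
The Downward API case above verifies that `defaultMode` on a downward API volume is applied to the projected files. A sketch of such a pod follows; field names match the core/v1 API, while the 0400 mode, pod name, and busybox image are illustrative choices, not the test's exact fixture:

```yaml
# Illustrative downward API volume with an explicit defaultMode; the
# container stats the projected file so its mode can be checked in logs.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400            # assumed mode for the sketch
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```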
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":89,"skipped":1681,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:18:53.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 10:18:53.237: INFO: Waiting up to 5m0s for pod "pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e" in namespace "emptydir-6723" to be "Succeeded or Failed" Oct 5 10:18:53.269: INFO: Pod "pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.392739ms Oct 5 10:18:55.276: INFO: Pod "pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038631537s Oct 5 10:18:57.294: INFO: Pod "pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056376363s STEP: Saw pod success Oct 5 10:18:57.294: INFO: Pod "pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e" satisfied condition "Succeeded or Failed" Oct 5 10:18:57.303: INFO: Trying to get logs from node kali-worker2 pod pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e container test-container: STEP: delete the pod Oct 5 10:18:57.365: INFO: Waiting for pod pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e to disappear Oct 5 10:18:57.376: INFO: Pod pod-9c7d47ee-5377-44b4-bb50-9bcb1b691c0e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:18:57.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6723" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":90,"skipped":1704,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:18:57.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 5 10:18:57.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2535' Oct 5 10:19:02.966: INFO: stderr: "" Oct 5 10:19:02.967: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 10:19:02.968: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2535' Oct 5 10:19:04.287: INFO: stderr: "" Oct 5 10:19:04.287: INFO: stdout: "update-demo-nautilus-hblld update-demo-nautilus-qhsfd " Oct 5 10:19:04.288: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hblld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2535' Oct 5 10:19:05.663: INFO: stderr: "" Oct 5 10:19:05.663: INFO: stdout: "" Oct 5 10:19:05.664: INFO: update-demo-nautilus-hblld is created but not running Oct 5 10:19:10.665: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2535' Oct 5 10:19:11.989: INFO: stderr: "" Oct 5 10:19:11.989: INFO: stdout: "update-demo-nautilus-hblld update-demo-nautilus-qhsfd " Oct 5 10:19:11.990: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hblld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2535' Oct 5 10:19:13.250: INFO: stderr: "" Oct 5 10:19:13.250: INFO: stdout: "true" Oct 5 10:19:13.251: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hblld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2535' Oct 5 10:19:14.550: INFO: stderr: "" Oct 5 10:19:14.551: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:19:14.551: INFO: validating pod update-demo-nautilus-hblld Oct 5 10:19:14.565: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:19:14.566: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 10:19:14.566: INFO: update-demo-nautilus-hblld is verified up and running Oct 5 10:19:14.567: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qhsfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2535' Oct 5 10:19:15.803: INFO: stderr: "" Oct 5 10:19:15.803: INFO: stdout: "true" Oct 5 10:19:15.804: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qhsfd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2535' Oct 5 10:19:17.001: INFO: stderr: "" Oct 5 10:19:17.001: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:19:17.001: INFO: validating pod update-demo-nautilus-qhsfd Oct 5 10:19:17.040: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:19:17.040: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 10:19:17.040: INFO: update-demo-nautilus-qhsfd is verified up and running STEP: using delete to clean up resources Oct 5 10:19:17.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2535' Oct 5 10:19:18.249: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 10:19:18.250: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 5 10:19:18.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2535' Oct 5 10:19:19.559: INFO: stderr: "No resources found in kubectl-2535 namespace.\n" Oct 5 10:19:19.559: INFO: stdout: "" Oct 5 10:19:19.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2535 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 10:19:20.864: INFO: stderr: "" Oct 5 10:19:20.864: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:19:20.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2535" for this suite. 
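The manifest piped to `kubectl create -f -` at the start of this test is not echoed in the log. A plausible reconstruction, assembled only from details the log does print (the resource name, the `name=update-demo` label queried by `-l`, the container name matched by the status template, and the image reported by the spec template), would look roughly like this — treat it as an illustrative sketch, not the e2e framework's actual fixture:

```yaml
# Hypothetical reconstruction; the real spec used by the test is not shown
# in the log. Every field value below is taken from the log output itself.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus        # "replicationcontroller/update-demo-nautilus created"
spec:
  replicas: 2                       # the log lists two pods: -hblld and -qhsfd
  selector:
    name: update-demo               # matches the -l name=update-demo queries above
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo           # the name the status template compares with eq
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
```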
• [SLOW TEST:23.490 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should create and stop a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":303,"completed":91,"skipped":1714,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:19:20.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 
10:19:20.977: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63" in namespace "security-context-test-2665" to be "Succeeded or Failed" Oct 5 10:19:20.988: INFO: Pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.499152ms Oct 5 10:19:22.996: INFO: Pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018826257s Oct 5 10:19:25.004: INFO: Pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026524258s Oct 5 10:19:25.004: INFO: Pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63" satisfied condition "Succeeded or Failed" Oct 5 10:19:25.027: INFO: Got logs for pod "busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:19:25.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2665" for this suite. 
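The pod spec behind this Security Context test is likewise not printed. Based on the pod name and the captured container log (`ip: RTNETLINK answers: Operation not permitted`), a sketch of what the framework plausibly creates is below; the `command` shown is an assumption chosen because an `ip` netlink operation is exactly what an unprivileged container would be denied:

```yaml
# Hypothetical sketch; the exact pod spec is not in the log. With
# privileged: false the "ip" netlink call is expected to fail, which is
# what the captured container output shows.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-231fdf4e-2cbf-4633-8002-91c52ef3ae63
spec:
  restartPolicy: Never              # pod is expected to reach "Succeeded or Failed"
  containers:
  - name: busybox-privileged-false
    image: busybox
    # Assumed command; produces "RTNETLINK answers: Operation not permitted"
    # when run without privilege.
    command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
    securityContext:
      privileged: false
```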
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":92,"skipped":1721,"failed":0} ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:19:25.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 10:19:25.134: INFO: Waiting up to 5m0s for pod "downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423" in namespace "downward-api-6017" to be "Succeeded or Failed" Oct 5 10:19:25.140: INFO: Pod "downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423": Phase="Pending", Reason="", readiness=false. Elapsed: 6.0769ms Oct 5 10:19:27.147: INFO: Pod "downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013291115s Oct 5 10:19:29.156: INFO: Pod "downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021554442s STEP: Saw pod success Oct 5 10:19:29.156: INFO: Pod "downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423" satisfied condition "Succeeded or Failed" Oct 5 10:19:29.161: INFO: Trying to get logs from node kali-worker pod downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423 container dapi-container: STEP: delete the pod Oct 5 10:19:29.222: INFO: Waiting for pod downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423 to disappear Oct 5 10:19:29.228: INFO: Pod downward-api-9c52b1d0-4e6e-454e-9b90-0e4fbab14423 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:19:29.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6017" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":303,"completed":93,"skipped":1721,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:19:29.247: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Update Demo 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:308 [It] should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a replication controller Oct 5 10:19:29.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4545' Oct 5 10:19:31.873: INFO: stderr: "" Oct 5 10:19:31.873: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 10:19:31.873: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:19:33.155: INFO: stderr: "" Oct 5 10:19:33.155: INFO: stdout: "update-demo-nautilus-mqdw8 update-demo-nautilus-wl88q " Oct 5 10:19:33.156: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqdw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:19:34.441: INFO: stderr: "" Oct 5 10:19:34.441: INFO: stdout: "" Oct 5 10:19:34.441: INFO: update-demo-nautilus-mqdw8 is created but not running Oct 5 10:19:39.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:19:40.790: INFO: stderr: "" Oct 5 10:19:40.790: INFO: stdout: "update-demo-nautilus-mqdw8 update-demo-nautilus-wl88q " Oct 5 10:19:40.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqdw8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:19:42.031: INFO: stderr: "" Oct 5 10:19:42.032: INFO: stdout: "true" Oct 5 10:19:42.032: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqdw8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:19:43.301: INFO: stderr: "" Oct 5 10:19:43.301: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:19:43.301: INFO: validating pod update-demo-nautilus-mqdw8 Oct 5 10:19:43.308: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:19:43.308: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 10:19:43.308: INFO: update-demo-nautilus-mqdw8 is verified up and running Oct 5 10:19:43.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:19:44.593: INFO: stderr: "" Oct 5 10:19:44.593: INFO: stdout: "true" Oct 5 10:19:44.593: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:19:45.825: INFO: stderr: "" Oct 5 10:19:45.826: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:19:45.826: INFO: validating pod update-demo-nautilus-wl88q Oct 5 10:19:45.831: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:19:45.832: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 10:19:45.832: INFO: update-demo-nautilus-wl88q is verified up and running STEP: scaling down the replication controller Oct 5 10:19:45.844: INFO: scanned /root for discovery docs: Oct 5 10:19:45.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-4545' Oct 5 10:19:47.115: INFO: stderr: "" Oct 5 10:19:47.115: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Oct 5 10:19:47.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:19:48.389: INFO: stderr: "" Oct 5 10:19:48.389: INFO: stdout: "update-demo-nautilus-mqdw8 update-demo-nautilus-wl88q " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 5 10:19:53.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:19:54.669: INFO: stderr: "" Oct 5 10:19:54.669: INFO: stdout: "update-demo-nautilus-mqdw8 update-demo-nautilus-wl88q " STEP: Replicas for name=update-demo: expected=1 actual=2 Oct 5 10:19:59.670: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:20:00.915: INFO: stderr: "" Oct 5 10:20:00.915: INFO: stdout: "update-demo-nautilus-wl88q " Oct 5 10:20:00.915: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:02.154: INFO: stderr: "" Oct 5 10:20:02.154: INFO: stdout: "true" Oct 5 10:20:02.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:03.405: INFO: stderr: "" Oct 5 10:20:03.405: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:20:03.405: INFO: validating pod update-demo-nautilus-wl88q Oct 5 10:20:03.410: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:20:03.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 10:20:03.411: INFO: update-demo-nautilus-wl88q is verified up and running STEP: scaling up the replication controller Oct 5 10:20:03.423: INFO: scanned /root for discovery docs: Oct 5 10:20:03.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-4545' Oct 5 10:20:04.714: INFO: stderr: "" Oct 5 10:20:04.715: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Oct 5 10:20:04.715: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:20:05.988: INFO: stderr: "" Oct 5 10:20:05.989: INFO: stdout: "update-demo-nautilus-tl22b update-demo-nautilus-wl88q " Oct 5 10:20:05.989: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tl22b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:07.203: INFO: stderr: "" Oct 5 10:20:07.203: INFO: stdout: "" Oct 5 10:20:07.204: INFO: update-demo-nautilus-tl22b is created but not running Oct 5 10:20:12.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4545' Oct 5 10:20:13.533: INFO: stderr: "" Oct 5 10:20:13.533: INFO: stdout: "update-demo-nautilus-tl22b update-demo-nautilus-wl88q " Oct 5 10:20:13.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tl22b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:14.729: INFO: stderr: "" Oct 5 10:20:14.730: INFO: stdout: "true" Oct 5 10:20:14.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tl22b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:15.958: INFO: stderr: "" Oct 5 10:20:15.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:20:15.958: INFO: validating pod update-demo-nautilus-tl22b Oct 5 10:20:15.964: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:20:15.964: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Oct 5 10:20:15.964: INFO: update-demo-nautilus-tl22b is verified up and running Oct 5 10:20:15.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:17.203: INFO: stderr: "" Oct 5 10:20:17.203: INFO: stdout: "true" Oct 5 10:20:17.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wl88q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4545' Oct 5 10:20:18.447: INFO: stderr: "" Oct 5 10:20:18.447: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Oct 5 10:20:18.447: INFO: validating pod update-demo-nautilus-wl88q Oct 5 10:20:18.453: INFO: got data: { "image": "nautilus.jpg" } Oct 5 10:20:18.454: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Oct 5 10:20:18.454: INFO: update-demo-nautilus-wl88q is verified up and running STEP: using delete to clean up resources Oct 5 10:20:18.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4545' Oct 5 10:20:19.641: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 10:20:19.641: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Oct 5 10:20:19.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4545' Oct 5 10:20:20.977: INFO: stderr: "No resources found in kubectl-4545 namespace.\n" Oct 5 10:20:20.977: INFO: stdout: "" Oct 5 10:20:20.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4545 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 10:20:22.189: INFO: stderr: "" Oct 5 10:20:22.189: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:20:22.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4545" for this suite. 
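The per-pod readiness check this test polls with is a kubectl go-template, passed on one line in the commands above. Reformatted for readability (using the `exists`, `and`, and `eq` template functions exactly as they appear in the log), it is:

```
{{if (exists . "status" "containerStatuses")}}
  {{range .status.containerStatuses}}
    {{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}
  {{end}}
{{end}}
```

It prints `true` only when a container named `update-demo` reports a `running` state; an empty stdout is what the test logs as "created but not running", after which it retries on a 5-second interval, as the timestamps above show.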
• [SLOW TEST:52.961 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:306 should scale a replication controller [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":303,"completed":94,"skipped":1733,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:20:22.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:20:29.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8106" for this suite. • [SLOW TEST:7.117 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":303,"completed":95,"skipped":1740,"failed":0} SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:20:29.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in container's command Oct 5 10:20:29.448: INFO: Waiting up to 5m0s for pod "var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97" in namespace "var-expansion-222" to be "Succeeded or Failed" Oct 5 10:20:29.547: INFO: Pod "var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97": Phase="Pending", Reason="", readiness=false. Elapsed: 98.628834ms Oct 5 10:20:31.557: INFO: Pod "var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108703983s Oct 5 10:20:33.566: INFO: Pod "var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.117218094s STEP: Saw pod success Oct 5 10:20:33.566: INFO: Pod "var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97" satisfied condition "Succeeded or Failed" Oct 5 10:20:33.570: INFO: Trying to get logs from node kali-worker2 pod var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97 container dapi-container: STEP: delete the pod Oct 5 10:20:33.618: INFO: Waiting for pod var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97 to disappear Oct 5 10:20:33.630: INFO: Pod var-expansion-1208dec5-5ebe-44ab-9565-5350685f0b97 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:20:33.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-222" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":303,"completed":96,"skipped":1743,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:20:33.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 10:20:33.751: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 10:20:33.771: INFO: Waiting for terminating namespaces to be deleted... Oct 5 10:20:33.779: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 5 10:20:33.789: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:20:33.789: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 10:20:33.789: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:20:33.789: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 10:20:33.789: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 5 10:20:33.805: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:20:33.806: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 10:20:33.806: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:20:33.806: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-c0857e11-6022-44c2-9345-320521085edc 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-c0857e11-6022-44c2-9345-320521085edc off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-c0857e11-6022-44c2-9345-320521085edc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:20:42.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2738" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:8.469 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":303,"completed":97,"skipped":1749,"failed":0} S ------------------------------ [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:20:42.113: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-2992 STEP: creating service affinity-clusterip-transition in namespace services-2992 STEP: creating replication controller affinity-clusterip-transition in namespace services-2992 I1005 10:20:42.247748 10 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-2992, replica count: 3 I1005 10:20:45.299586 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:20:48.300492 10 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 10:20:48.424: INFO: Creating new exec pod Oct 5 10:20:53.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2992 execpod-affinitycmd6h -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Oct 5 10:20:55.017: INFO: stderr: "I1005 10:20:54.876075 2292 log.go:181] (0x251e620) (0x251f730) Create stream\nI1005 10:20:54.879318 2292 log.go:181] (0x251e620) (0x251f730) Stream added, broadcasting: 1\nI1005 10:20:54.906024 2292 log.go:181] (0x251e620) Reply frame received for 1\nI1005 10:20:54.906530 2292 log.go:181] (0x251e620) (0x3130070) Create 
stream\nI1005 10:20:54.906605 2292 log.go:181] (0x251e620) (0x3130070) Stream added, broadcasting: 3\nI1005 10:20:54.907948 2292 log.go:181] (0x251e620) Reply frame received for 3\nI1005 10:20:54.908177 2292 log.go:181] (0x251e620) (0x26743f0) Create stream\nI1005 10:20:54.908247 2292 log.go:181] (0x251e620) (0x26743f0) Stream added, broadcasting: 5\nI1005 10:20:54.909342 2292 log.go:181] (0x251e620) Reply frame received for 5\nI1005 10:20:54.978135 2292 log.go:181] (0x251e620) Data frame received for 5\nI1005 10:20:54.978360 2292 log.go:181] (0x26743f0) (5) Data frame handling\nI1005 10:20:54.978812 2292 log.go:181] (0x26743f0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-clusterip-transition 80\nI1005 10:20:54.999965 2292 log.go:181] (0x251e620) Data frame received for 5\nI1005 10:20:55.000149 2292 log.go:181] (0x26743f0) (5) Data frame handling\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\nI1005 10:20:55.000280 2292 log.go:181] (0x251e620) Data frame received for 3\nI1005 10:20:55.000512 2292 log.go:181] (0x3130070) (3) Data frame handling\nI1005 10:20:55.000651 2292 log.go:181] (0x26743f0) (5) Data frame sent\nI1005 10:20:55.000808 2292 log.go:181] (0x251e620) Data frame received for 5\nI1005 10:20:55.001038 2292 log.go:181] (0x26743f0) (5) Data frame handling\nI1005 10:20:55.002348 2292 log.go:181] (0x251e620) Data frame received for 1\nI1005 10:20:55.002479 2292 log.go:181] (0x251f730) (1) Data frame handling\nI1005 10:20:55.002602 2292 log.go:181] (0x251f730) (1) Data frame sent\nI1005 10:20:55.003276 2292 log.go:181] (0x251e620) (0x251f730) Stream removed, broadcasting: 1\nI1005 10:20:55.005934 2292 log.go:181] (0x251e620) Go away received\nI1005 10:20:55.008058 2292 log.go:181] (0x251e620) (0x251f730) Stream removed, broadcasting: 1\nI1005 10:20:55.008592 2292 log.go:181] (0x251e620) (0x3130070) Stream removed, broadcasting: 3\nI1005 10:20:55.008776 2292 log.go:181] (0x251e620) (0x26743f0) Stream removed, broadcasting: 5\n" 
Oct 5 10:20:55.018: INFO: stdout: "" Oct 5 10:20:55.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2992 execpod-affinitycmd6h -- /bin/sh -x -c nc -zv -t -w 2 10.105.254.228 80' Oct 5 10:20:56.541: INFO: stderr: "I1005 10:20:56.411428 2312 log.go:181] (0x2c9a310) (0x2c9a380) Create stream\nI1005 10:20:56.416794 2312 log.go:181] (0x2c9a310) (0x2c9a380) Stream added, broadcasting: 1\nI1005 10:20:56.433862 2312 log.go:181] (0x2c9a310) Reply frame received for 1\nI1005 10:20:56.434357 2312 log.go:181] (0x2c9a310) (0x2c9a460) Create stream\nI1005 10:20:56.434424 2312 log.go:181] (0x2c9a310) (0x2c9a460) Stream added, broadcasting: 3\nI1005 10:20:56.435745 2312 log.go:181] (0x2c9a310) Reply frame received for 3\nI1005 10:20:56.435987 2312 log.go:181] (0x2c9a310) (0x30240e0) Create stream\nI1005 10:20:56.436046 2312 log.go:181] (0x2c9a310) (0x30240e0) Stream added, broadcasting: 5\nI1005 10:20:56.437487 2312 log.go:181] (0x2c9a310) Reply frame received for 5\nI1005 10:20:56.520182 2312 log.go:181] (0x2c9a310) Data frame received for 5\nI1005 10:20:56.520410 2312 log.go:181] (0x2c9a310) Data frame received for 3\nI1005 10:20:56.520826 2312 log.go:181] (0x2c9a310) Data frame received for 1\nI1005 10:20:56.521086 2312 log.go:181] (0x2c9a380) (1) Data frame handling\nI1005 10:20:56.521625 2312 log.go:181] (0x2c9a460) (3) Data frame handling\nI1005 10:20:56.521934 2312 log.go:181] (0x30240e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.254.228 80\nConnection to 10.105.254.228 80 port [tcp/http] succeeded!\nI1005 10:20:56.524464 2312 log.go:181] (0x2c9a380) (1) Data frame sent\nI1005 10:20:56.524773 2312 log.go:181] (0x30240e0) (5) Data frame sent\nI1005 10:20:56.524982 2312 log.go:181] (0x2c9a310) Data frame received for 5\nI1005 10:20:56.525088 2312 log.go:181] (0x30240e0) (5) Data frame handling\nI1005 10:20:56.525775 2312 log.go:181] (0x2c9a310) (0x2c9a380) Stream removed, 
broadcasting: 1\nI1005 10:20:56.526372 2312 log.go:181] (0x2c9a310) Go away received\nI1005 10:20:56.530201 2312 log.go:181] (0x2c9a310) (0x2c9a380) Stream removed, broadcasting: 1\nI1005 10:20:56.530404 2312 log.go:181] (0x2c9a310) (0x2c9a460) Stream removed, broadcasting: 3\nI1005 10:20:56.530565 2312 log.go:181] (0x2c9a310) (0x30240e0) Stream removed, broadcasting: 5\n" Oct 5 10:20:56.542: INFO: stdout: "" Oct 5 10:20:56.556: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2992 execpod-affinitycmd6h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.254.228:80/ ; done' Oct 5 10:20:58.116: INFO: stderr: "I1005 10:20:57.889033 2332 log.go:181] (0x299e0e0) (0x299e2a0) Create stream\nI1005 10:20:57.893366 2332 log.go:181] (0x299e0e0) (0x299e2a0) Stream added, broadcasting: 1\nI1005 10:20:57.912683 2332 log.go:181] (0x299e0e0) Reply frame received for 1\nI1005 10:20:57.913381 2332 log.go:181] (0x299e0e0) (0x299e620) Create stream\nI1005 10:20:57.913439 2332 log.go:181] (0x299e0e0) (0x299e620) Stream added, broadcasting: 3\nI1005 10:20:57.915307 2332 log.go:181] (0x299e0e0) Reply frame received for 3\nI1005 10:20:57.915602 2332 log.go:181] (0x299e0e0) (0x3014070) Create stream\nI1005 10:20:57.915677 2332 log.go:181] (0x299e0e0) (0x3014070) Stream added, broadcasting: 5\nI1005 10:20:57.917667 2332 log.go:181] (0x299e0e0) Reply frame received for 5\nI1005 10:20:57.999137 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:57.999500 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:57.999742 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:57.999855 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.000716 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.001466 2332 log.go:181] 
(0x299e620) (3) Data frame sent\nI1005 10:20:58.003576 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.003746 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.003887 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.004011 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.004125 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.004211 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.004315 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.004405 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.004492 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.009375 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.009511 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.009633 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.009796 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.009895 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.010017 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.010175 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.010308 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.010448 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.015071 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.015177 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.015287 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.015903 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.016013 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.016110 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.016208 
2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.016290 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.016379 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.022891 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.023019 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.023158 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.023740 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.023865 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.023995 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.024157 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.024275 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.024372 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.031175 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.031293 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.031450 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.032069 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.032183 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.032274 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.032414 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.032512 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.032644 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.038957 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.039099 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.039295 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 
10:20:58.039892 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.040069 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.040230 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.040410 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.040543 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.040721 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.045402 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.045528 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.045673 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.046371 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.046510 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.046639 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.046770 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.046894 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.047013 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.051001 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.051149 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.051303 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.051964 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.052096 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.052196 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.052302 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.052394 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI1005 10:20:58.052508 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 
10:20:58.052957 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.053124 2332 log.go:181] (0x3014070) (5) Data frame sent\n 2 http://10.105.254.228:80/\nI1005 10:20:58.053282 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.056728 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.056820 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.057010 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.057477 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.057582 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.057672 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.057763 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.057837 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.057926 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.061381 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.061497 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.061737 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.062146 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.062251 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.062349 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.062441 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.062522 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.062610 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.068520 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.068689 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.068815 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.069589 2332 
log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.069725 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.069914 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeoutI1005 10:20:58.070083 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.070238 2332 log.go:181] (0x3014070) (5) Data frame handling\n 2 http://10.105.254.228:80/\nI1005 10:20:58.070406 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.070575 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.070746 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.070925 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.075068 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.075251 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.075466 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.075802 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.075916 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.076073 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.076209 2332 log.go:181] (0x3014070) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.076400 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.076584 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.082167 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.082303 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.082443 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.082779 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.082926 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.083039 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 
10:20:58.083187 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.083272 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.083367 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.086348 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.086462 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.086566 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.086736 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.086878 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.086977 2332 log.go:181] (0x299e0e0) Data frame received for 3\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.087055 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.087121 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.087179 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.092590 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.092666 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.092756 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.093475 2332 log.go:181] (0x299e0e0) Data frame received for 5\nI1005 10:20:58.093638 2332 log.go:181] (0x3014070) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:58.093792 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.093902 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.094001 2332 log.go:181] (0x3014070) (5) Data frame sent\nI1005 10:20:58.094137 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.098344 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.098447 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.098571 2332 log.go:181] (0x299e620) (3) Data frame sent\nI1005 10:20:58.098952 2332 log.go:181] (0x299e0e0) Data frame 
received for 5\nI1005 10:20:58.099017 2332 log.go:181] (0x3014070) (5) Data frame handling\nI1005 10:20:58.099213 2332 log.go:181] (0x299e0e0) Data frame received for 3\nI1005 10:20:58.099294 2332 log.go:181] (0x299e620) (3) Data frame handling\nI1005 10:20:58.101272 2332 log.go:181] (0x299e0e0) Data frame received for 1\nI1005 10:20:58.101363 2332 log.go:181] (0x299e2a0) (1) Data frame handling\nI1005 10:20:58.101458 2332 log.go:181] (0x299e2a0) (1) Data frame sent\nI1005 10:20:58.102334 2332 log.go:181] (0x299e0e0) (0x299e2a0) Stream removed, broadcasting: 1\nI1005 10:20:58.103911 2332 log.go:181] (0x299e0e0) Go away received\nI1005 10:20:58.106745 2332 log.go:181] (0x299e0e0) (0x299e2a0) Stream removed, broadcasting: 1\nI1005 10:20:58.106946 2332 log.go:181] (0x299e0e0) (0x299e620) Stream removed, broadcasting: 3\nI1005 10:20:58.107102 2332 log.go:181] (0x299e0e0) (0x3014070) Stream removed, broadcasting: 5\n" Oct 5 10:20:58.122: INFO: stdout: "\naffinity-clusterip-transition-vml7h\naffinity-clusterip-transition-zdc5v\naffinity-clusterip-transition-vml7h\naffinity-clusterip-transition-vml7h\naffinity-clusterip-transition-vml7h\naffinity-clusterip-transition-zdc5v\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-zdc5v\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-vml7h\naffinity-clusterip-transition-zdc5v\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-vml7h" Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-vml7h Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-zdc5v Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-vml7h Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-vml7h Oct 5 10:20:58.122: INFO: Received response from host: 
affinity-clusterip-transition-vml7h Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-zdc5v Oct 5 10:20:58.122: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-zdc5v Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-vml7h Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-zdc5v Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:58.123: INFO: Received response from host: affinity-clusterip-transition-vml7h Oct 5 10:20:58.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-2992 execpod-affinitycmd6h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.105.254.228:80/ ; done' Oct 5 10:20:59.808: INFO: stderr: "I1005 10:20:59.578383 2353 log.go:181] (0x264ba40) (0x264bce0) Create stream\nI1005 10:20:59.581486 2353 log.go:181] (0x264ba40) (0x264bce0) Stream added, broadcasting: 1\nI1005 10:20:59.589829 2353 log.go:181] (0x264ba40) Reply frame received for 1\nI1005 10:20:59.590656 2353 log.go:181] (0x264ba40) (0x30100e0) Create stream\nI1005 10:20:59.590761 2353 log.go:181] (0x264ba40) (0x30100e0) Stream added, broadcasting: 3\nI1005 10:20:59.592455 2353 log.go:181] (0x264ba40) Reply frame received for 3\nI1005 10:20:59.592782 2353 log.go:181] (0x264ba40) (0x2798460) Create stream\nI1005 10:20:59.592915 2353 log.go:181] (0x264ba40) 
(0x2798460) Stream added, broadcasting: 5\nI1005 10:20:59.594199 2353 log.go:181] (0x264ba40) Reply frame received for 5\nI1005 10:20:59.686962 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.687281 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.687427 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.687522 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.688222 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.688434 2353 log.go:181] (0x30100e0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.692953 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.693127 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.693310 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.693973 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.694121 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.694281 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.694464 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.694597 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.694785 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.699365 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.699532 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.699704 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.700163 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.700278 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.700416 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.700577 2353 log.go:181] (0x30100e0) (3) Data frame 
handling\nI1005 10:20:59.700692 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.700812 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.704076 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.704331 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.704574 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.705204 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.705434 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2I1005 10:20:59.705686 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.705833 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.705955 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.706163 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.706342 2353 log.go:181] (0x2798460) (5) Data frame handling\n http://10.105.254.228:80/\nI1005 10:20:59.706441 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.706566 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.709292 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.709405 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.709529 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.710101 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.710221 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.710333 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.710470 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.710579 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.710678 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.714435 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.714618 2353 log.go:181] 
(0x30100e0) (3) Data frame handling\nI1005 10:20:59.714776 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.715613 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.715820 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.715972 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.716095 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.716237 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.716351 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.721996 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.722207 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.722353 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.722493 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.722630 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.722760 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.722908 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.723015 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.723179 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.727450 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.727546 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.727691 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.728125 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.728233 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.728305 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.728385 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.728449 
2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.728563 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.733477 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.733559 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.733652 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.734230 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.734385 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/I1005 10:20:59.734491 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.734661 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.734813 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.734937 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.735048 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.735149 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.735276 2353 log.go:181] (0x2798460) (5) Data frame sent\n\nI1005 10:20:59.738992 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.739161 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.739282 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.739672 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.739791 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.739879 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.739986 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.740070 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.740148 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.746275 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.746403 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 
10:20:59.746520 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.746878 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.746980 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.747086 2353 log.go:181] (0x2798460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.747180 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.747269 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.747374 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.754126 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.754284 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.754456 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.755036 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.755281 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.755508 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.755724 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.755862 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.756023 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.761556 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.761660 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.761791 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.762628 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.762751 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.762887 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.763024 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.763109 2353 log.go:181] (0x2798460) (5) Data frame 
sent\nI1005 10:20:59.763213 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.768555 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.768660 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.768756 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.769331 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.769476 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.769581 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.769722 2353 log.go:181] (0x2798460) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.769870 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.769985 2353 log.go:181] (0x2798460) (5) Data frame sent\nI1005 10:20:59.776191 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.776334 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.776490 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.777240 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.777389 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.777495 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.777609 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.777704 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.777798 2353 log.go:181] (0x2798460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.782211 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.782339 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.782478 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.783026 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.783229 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.783410 2353 log.go:181] 
(0x2798460) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.105.254.228:80/\nI1005 10:20:59.783552 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.783689 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.783859 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.789363 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.789448 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.789547 2353 log.go:181] (0x30100e0) (3) Data frame sent\nI1005 10:20:59.790502 2353 log.go:181] (0x264ba40) Data frame received for 3\nI1005 10:20:59.790604 2353 log.go:181] (0x264ba40) Data frame received for 5\nI1005 10:20:59.790788 2353 log.go:181] (0x2798460) (5) Data frame handling\nI1005 10:20:59.791049 2353 log.go:181] (0x30100e0) (3) Data frame handling\nI1005 10:20:59.792655 2353 log.go:181] (0x264ba40) Data frame received for 1\nI1005 10:20:59.792766 2353 log.go:181] (0x264bce0) (1) Data frame handling\nI1005 10:20:59.793047 2353 log.go:181] (0x264bce0) (1) Data frame sent\nI1005 10:20:59.794260 2353 log.go:181] (0x264ba40) (0x264bce0) Stream removed, broadcasting: 1\nI1005 10:20:59.797064 2353 log.go:181] (0x264ba40) Go away received\nI1005 10:20:59.798902 2353 log.go:181] (0x264ba40) (0x264bce0) Stream removed, broadcasting: 1\nI1005 10:20:59.799285 2353 log.go:181] (0x264ba40) (0x30100e0) Stream removed, broadcasting: 3\nI1005 10:20:59.799423 2353 log.go:181] (0x264ba40) (0x2798460) Stream removed, broadcasting: 5\n" Oct 5 10:20:59.812: INFO: stdout: 
"\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps\naffinity-clusterip-transition-r89ps" Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Received 
response from host: affinity-clusterip-transition-r89ps Oct 5 10:20:59.813: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-2992, will wait for the garbage collector to delete the pods Oct 5 10:20:59.942: INFO: Deleting ReplicationController affinity-clusterip-transition took: 8.992786ms Oct 5 10:21:00.443: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 501.185737ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:21:08.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2992" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:26.685 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":303,"completed":98,"skipped":1750,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:21:08.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 10:21:08.922: INFO: Waiting up to 5m0s for pod "downward-api-19c21010-44a0-4165-bc12-90cb5019316f" in namespace "downward-api-5557" to be "Succeeded or Failed" Oct 5 10:21:08.935: INFO: Pod "downward-api-19c21010-44a0-4165-bc12-90cb5019316f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.391256ms Oct 5 10:21:11.087: INFO: Pod "downward-api-19c21010-44a0-4165-bc12-90cb5019316f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164533831s Oct 5 10:21:13.094: INFO: Pod "downward-api-19c21010-44a0-4165-bc12-90cb5019316f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.172022453s STEP: Saw pod success Oct 5 10:21:13.094: INFO: Pod "downward-api-19c21010-44a0-4165-bc12-90cb5019316f" satisfied condition "Succeeded or Failed" Oct 5 10:21:13.098: INFO: Trying to get logs from node kali-worker pod downward-api-19c21010-44a0-4165-bc12-90cb5019316f container dapi-container: STEP: delete the pod Oct 5 10:21:13.477: INFO: Waiting for pod downward-api-19c21010-44a0-4165-bc12-90cb5019316f to disappear Oct 5 10:21:13.489: INFO: Pod downward-api-19c21010-44a0-4165-bc12-90cb5019316f no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:21:13.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5557" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":303,"completed":99,"skipped":1778,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:21:13.545: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:21:13.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8535" for this suite. 
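The discovery walk in the test above (fetch `/apis`, find the `apiextensions.k8s.io` group, then confirm the `apiextensions.k8s.io/v1` group/version) can be sketched without a cluster. The payload below is a hypothetical, trimmed `APIGroupList` document written for illustration, not output captured from this run.

```python
# Minimal stand-in for the /apis discovery check performed by the test.
# `apis_doc` is a hypothetical, trimmed APIGroupList payload.

apis_doc = {
    "kind": "APIGroupList",
    "groups": [
        {
            "name": "apiextensions.k8s.io",
            "versions": [
                {"groupVersion": "apiextensions.k8s.io/v1", "version": "v1"},
            ],
        },
    ],
}


def find_group_version(doc, group, group_version):
    """Mirror the test's assertion: the group exists in the discovery
    document and advertises the expected groupVersion."""
    for g in doc["groups"]:
        if g["name"] == group:
            return any(v["groupVersion"] == group_version for v in g["versions"])
    return False
```

Against a live apiserver the same check would run over the JSON returned by `GET /apis`, then recurse into `/apis/apiextensions.k8s.io` and `/apis/apiextensions.k8s.io/v1` as the STEP lines show.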
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":303,"completed":100,"skipped":1789,"failed":0} SSSS ------------------------------ [sig-instrumentation] Events API should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:21:13.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events Oct 5 10:21:13.798: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:21:13.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2648" for this suite. 
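The Events API test above follows a create / list-by-label / DeleteCollection / re-list pattern. The sketch below models that flow over an in-memory list rather than a real apiserver; the label key and event names are illustrative, not taken from the run.

```python
# In-memory stand-in for the Events API flow: create a set of labeled
# events, list them by label, DeleteCollection on that label selector,
# then verify the matching list is empty. Names are illustrative.

events = [
    {"name": f"test-event-{i}", "labels": {"testevent-set": "true"}}
    for i in range(3)
]
events.append({"name": "unrelated-event", "labels": {}})


def list_by_label(store, key, value):
    """LIST with a label selector: return only matching objects."""
    return [e for e in store if e["labels"].get(key) == value]


def delete_collection(store, key, value):
    """DeleteCollection: remove every object matching the selector."""
    store[:] = [e for e in store if e["labels"].get(key) != value]


matched = list_by_label(events, "testevent-set", "true")
delete_collection(events, "testevent-set", "true")
```

The real test issues one `DeleteCollection` request and then checks that a follow-up labeled LIST returns the expected (zero) quantity, exactly as the final STEP line records.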
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":303,"completed":101,"skipped":1793,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:21:13.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-2af7d051-3d3e-45ac-a7c6-3bab100e4792 in namespace container-probe-118 Oct 5 10:21:17.996: INFO: Started pod busybox-2af7d051-3d3e-45ac-a7c6-3bab100e4792 in namespace container-probe-118 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 10:21:18.002: INFO: Initial restart count of pod busybox-2af7d051-3d3e-45ac-a7c6-3bab100e4792 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:25:19.100: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-118" for this suite. • [SLOW TEST:245.304 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":102,"skipped":1802,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:25:19.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as 
owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W1005 10:25:32.192257 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 10:26:34.295: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. Oct 5 10:26:34.295: INFO: Deleting pod "simpletest-rc-to-be-deleted-6tpdm" in namespace "gc-1083" Oct 5 10:26:34.319: INFO: Deleting pod "simpletest-rc-to-be-deleted-8m8sh" in namespace "gc-1083" Oct 5 10:26:34.378: INFO: Deleting pod "simpletest-rc-to-be-deleted-bprnx" in namespace "gc-1083" Oct 5 10:26:34.443: INFO: Deleting pod "simpletest-rc-to-be-deleted-fj4h5" in namespace "gc-1083" Oct 5 10:26:34.498: INFO: Deleting pod "simpletest-rc-to-be-deleted-h9cjv" in namespace "gc-1083" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:26:34.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1083" for this suite. 
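The garbage-collector test above hinges on one rule: a dependent is only collected once it has no remaining valid owner. Half the pods get `simpletest-rc-to-stay` added as a second owner, so deleting `simpletest-rc-to-be-deleted` must leave those pods alive. The sketch below models that rule over plain dicts; it is not the controller's real data structures.

```python
# Stand-in for the ownerReference rule the GC test exercises: an object
# survives collection while at least one of its owners is still live.
# Pod and RC names echo the test; the structures are illustrative.

live_owners = {"simpletest-rc-to-stay"}  # rc-to-be-deleted is gone

pods = [
    {"name": "pod-single-owner",
     "owners": ["simpletest-rc-to-be-deleted"]},
    {"name": "pod-dual-owner",
     "owners": ["simpletest-rc-to-be-deleted", "simpletest-rc-to-stay"]},
]


def garbage_collect(objects, live):
    """Keep any object that still references at least one live owner;
    objects whose owners are all gone become garbage."""
    return [o for o in objects if any(ref in live for ref in o["owners"])]


survivors = garbage_collect(pods, live_owners)
```

This is why the test name reads "should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted": only the singly-owned pods are eligible for collection.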
• [SLOW TEST:75.868 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":303,"completed":103,"skipped":1806,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:26:35.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:26:43.805: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Oct 5 10:26:46.019: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490403, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490403, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490403, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490403, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:26:49.100: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:26:49.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be 
successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:26:50.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7018" for this suite. STEP: Destroying namespace "webhook-7018-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.364 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":303,"completed":104,"skipped":1818,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:26:50.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:26:50.480: INFO: Waiting up to 5m0s for pod "downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d" in namespace "projected-3045" to be "Succeeded or Failed" Oct 5 10:26:50.553: INFO: Pod "downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d": Phase="Pending", Reason="", readiness=false. Elapsed: 72.433219ms Oct 5 10:26:52.561: INFO: Pod "downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080778102s Oct 5 10:26:54.568: INFO: Pod "downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.087365325s STEP: Saw pod success Oct 5 10:26:54.568: INFO: Pod "downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d" satisfied condition "Succeeded or Failed" Oct 5 10:26:54.572: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d container client-container: STEP: delete the pod Oct 5 10:26:54.623: INFO: Waiting for pod downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d to disappear Oct 5 10:26:54.647: INFO: Pod downwardapi-volume-39ac3fa5-f446-4f9f-9da7-f8ffed0cba9d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:26:54.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3045" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":105,"skipped":1822,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:26:54.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Oct 5 10:26:55.039: INFO: >>> kubeConfig: /root/.kube/config Oct 5 10:27:15.711: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:28:18.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4827" for this suite. • [SLOW TEST:83.584 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":303,"completed":106,"skipped":1833,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:28:18.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:28:18.406: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Oct 5 10:28:18.426: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:18.442: INFO: Number of nodes with available pods: 0 Oct 5 10:28:18.442: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:19.454: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:19.460: INFO: Number of nodes with available pods: 0 Oct 5 10:28:19.460: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:20.668: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:20.861: INFO: Number of nodes with available pods: 0 Oct 5 10:28:20.861: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:21.665: INFO: DaemonSet pods 
can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:22.084: INFO: Number of nodes with available pods: 0 Oct 5 10:28:22.084: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:22.486: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:22.493: INFO: Number of nodes with available pods: 0 Oct 5 10:28:22.493: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:23.474: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:23.487: INFO: Number of nodes with available pods: 1 Oct 5 10:28:23.487: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:24.453: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:24.461: INFO: Number of nodes with available pods: 2 Oct 5 10:28:24.462: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Oct 5 10:28:24.524: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:24.525: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:24.545: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:25.555: INFO: Wrong image for pod: daemon-set-5729x. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:25.556: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:25.563: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:26.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:26.555: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:26.567: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:27.556: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:27.556: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:27.556: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:27.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:28.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:28.555: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 10:28:28.556: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:28.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:29.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:29.556: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:29.556: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:29.567: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:30.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:30.554: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:30.555: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:30.566: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:31.561: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:31.561: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 10:28:31.561: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:31.573: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:32.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:32.554: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:32.554: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:32.563: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:33.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:33.554: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:33.555: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:33.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:34.553: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:34.554: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 10:28:34.554: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:34.564: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:35.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:35.554: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:35.555: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:35.564: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:36.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:36.555: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:36.555: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:36.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:37.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:37.555: INFO: Wrong image for pod: daemon-set-w8fl9. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 10:28:37.555: INFO: Pod daemon-set-w8fl9 is not available Oct 5 10:28:37.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:38.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:38.556: INFO: Pod daemon-set-qgsvk is not available Oct 5 10:28:38.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:39.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:39.555: INFO: Pod daemon-set-qgsvk is not available Oct 5 10:28:39.566: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:40.555: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:40.555: INFO: Pod daemon-set-qgsvk is not available Oct 5 10:28:40.567: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:41.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:41.565: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:42.552: INFO: Wrong image for pod: daemon-set-5729x. 
Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:42.562: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:43.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:43.554: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:43.562: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:44.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:44.554: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:44.564: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:45.552: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:45.552: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:45.558: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:46.578: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. 
Oct 5 10:28:46.578: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:46.594: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:47.553: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:47.553: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:47.561: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:48.554: INFO: Wrong image for pod: daemon-set-5729x. Expected: k8s.gcr.io/e2e-test-images/agnhost:2.20, got: docker.io/library/httpd:2.4.38-alpine. Oct 5 10:28:48.554: INFO: Pod daemon-set-5729x is not available Oct 5 10:28:48.563: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:49.555: INFO: Pod daemon-set-5fh5w is not available Oct 5 10:28:49.562: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Oct 5 10:28:49.571: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:49.577: INFO: Number of nodes with available pods: 1 Oct 5 10:28:49.577: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:50.588: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:50.594: INFO: Number of nodes with available pods: 1 Oct 5 10:28:50.594: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:51.588: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:51.595: INFO: Number of nodes with available pods: 1 Oct 5 10:28:51.595: INFO: Node kali-worker is running more than one daemon pod Oct 5 10:28:52.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 10:28:52.593: INFO: Number of nodes with available pods: 2 Oct 5 10:28:52.593: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1104, will wait for the garbage collector to delete the pods Oct 5 10:28:52.687: INFO: Deleting DaemonSet.extensions daemon-set took: 9.980592ms Oct 5 10:28:53.088: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.02128ms Oct 5 10:28:58.695: INFO: Number of nodes with available pods: 0 Oct 5 10:28:58.695: INFO: Number of running nodes: 
0, number of available pods: 0 Oct 5 10:28:58.700: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1104/daemonsets","resourceVersion":"3161834"},"items":null} Oct 5 10:28:58.705: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1104/pods","resourceVersion":"3161834"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:28:58.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1104" for this suite. • [SLOW TEST:40.509 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":303,"completed":107,"skipped":1841,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 
10:28:58.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 5 10:29:07.028: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:07.052: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:09.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:09.063: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:11.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:11.062: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:13.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:13.062: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:15.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:15.061: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:17.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:17.062: INFO: Pod pod-with-prestop-exec-hook still exists Oct 5 10:29:19.053: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Oct 5 10:29:19.061: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:29:19.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4811" for this suite. • [SLOW TEST:20.319 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":303,"completed":108,"skipped":1861,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:29:19.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7050 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7050 I1005 10:29:19.333941 10 runners.go:190] Created replication controller with name: externalname-service, namespace: services-7050, replica count: 2 I1005 10:29:22.385477 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:29:25.386607 10 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 10:29:25.386: INFO: Creating new exec pod Oct 5 10:29:30.425: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7050 execpodn85dg -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Oct 5 10:29:35.018: INFO: stderr: "I1005 10:29:34.895202 2373 log.go:181] (0x24fff80) (0x2a5c000) Create stream\nI1005 10:29:34.898225 2373 log.go:181] (0x24fff80) (0x2a5c000) Stream added, broadcasting: 1\nI1005 10:29:34.911983 2373 log.go:181] (0x24fff80) Reply frame received for 1\nI1005 10:29:34.912594 2373 log.go:181] (0x24fff80) (0x28f0460) Create stream\nI1005 10:29:34.912676 2373 log.go:181] (0x24fff80) (0x28f0460) Stream added, broadcasting: 3\nI1005 10:29:34.914868 2373 log.go:181] (0x24fff80) Reply frame received for 3\nI1005 10:29:34.915354 2373 log.go:181] (0x24fff80) 
(0x2ef8070) Create stream\nI1005 10:29:34.915520 2373 log.go:181] (0x24fff80) (0x2ef8070) Stream added, broadcasting: 5\nI1005 10:29:34.917133 2373 log.go:181] (0x24fff80) Reply frame received for 5\nI1005 10:29:34.990955 2373 log.go:181] (0x24fff80) Data frame received for 3\nI1005 10:29:34.991149 2373 log.go:181] (0x28f0460) (3) Data frame handling\nI1005 10:29:34.991329 2373 log.go:181] (0x24fff80) Data frame received for 5\nI1005 10:29:34.991455 2373 log.go:181] (0x2ef8070) (5) Data frame handling\nI1005 10:29:34.992350 2373 log.go:181] (0x2ef8070) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI1005 10:29:34.993232 2373 log.go:181] (0x24fff80) Data frame received for 5\nI1005 10:29:34.993349 2373 log.go:181] (0x2ef8070) (5) Data frame handling\nI1005 10:29:34.994245 2373 log.go:181] (0x24fff80) Data frame received for 1\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI1005 10:29:34.994463 2373 log.go:181] (0x2a5c000) (1) Data frame handling\nI1005 10:29:34.994569 2373 log.go:181] (0x2ef8070) (5) Data frame sent\nI1005 10:29:34.994746 2373 log.go:181] (0x24fff80) Data frame received for 5\nI1005 10:29:34.994847 2373 log.go:181] (0x2ef8070) (5) Data frame handling\nI1005 10:29:34.995046 2373 log.go:181] (0x2a5c000) (1) Data frame sent\nI1005 10:29:34.995851 2373 log.go:181] (0x24fff80) (0x2a5c000) Stream removed, broadcasting: 1\nI1005 10:29:35.007759 2373 log.go:181] (0x24fff80) Go away received\nI1005 10:29:35.009561 2373 log.go:181] (0x24fff80) (0x2a5c000) Stream removed, broadcasting: 1\nI1005 10:29:35.010409 2373 log.go:181] (0x24fff80) (0x28f0460) Stream removed, broadcasting: 3\nI1005 10:29:35.010723 2373 log.go:181] (0x24fff80) (0x2ef8070) Stream removed, broadcasting: 5\n" Oct 5 10:29:35.019: INFO: stdout: "" Oct 5 10:29:35.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7050 execpodn85dg -- /bin/sh -x -c nc -zv -t -w 2 
10.103.164.135 80' Oct 5 10:29:36.537: INFO: stderr: "I1005 10:29:36.422087 2393 log.go:181] (0x2590070) (0x25900e0) Create stream\nI1005 10:29:36.423838 2393 log.go:181] (0x2590070) (0x25900e0) Stream added, broadcasting: 1\nI1005 10:29:36.433647 2393 log.go:181] (0x2590070) Reply frame received for 1\nI1005 10:29:36.434402 2393 log.go:181] (0x2590070) (0x2590310) Create stream\nI1005 10:29:36.434536 2393 log.go:181] (0x2590070) (0x2590310) Stream added, broadcasting: 3\nI1005 10:29:36.436284 2393 log.go:181] (0x2590070) Reply frame received for 3\nI1005 10:29:36.436477 2393 log.go:181] (0x2590070) (0x25c2070) Create stream\nI1005 10:29:36.436530 2393 log.go:181] (0x2590070) (0x25c2070) Stream added, broadcasting: 5\nI1005 10:29:36.438094 2393 log.go:181] (0x2590070) Reply frame received for 5\nI1005 10:29:36.519777 2393 log.go:181] (0x2590070) Data frame received for 3\nI1005 10:29:36.520149 2393 log.go:181] (0x2590070) Data frame received for 5\nI1005 10:29:36.520262 2393 log.go:181] (0x2590310) (3) Data frame handling\nI1005 10:29:36.520544 2393 log.go:181] (0x25c2070) (5) Data frame handling\nI1005 10:29:36.520810 2393 log.go:181] (0x2590070) Data frame received for 1\nI1005 10:29:36.520979 2393 log.go:181] (0x25900e0) (1) Data frame handling\nI1005 10:29:36.522382 2393 log.go:181] (0x25c2070) (5) Data frame sent\nI1005 10:29:36.522754 2393 log.go:181] (0x25900e0) (1) Data frame sent\nI1005 10:29:36.523350 2393 log.go:181] (0x2590070) Data frame received for 5\nI1005 10:29:36.523466 2393 log.go:181] (0x25c2070) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.164.135 80\nConnection to 10.103.164.135 80 port [tcp/http] succeeded!\nI1005 10:29:36.525174 2393 log.go:181] (0x2590070) (0x25900e0) Stream removed, broadcasting: 1\nI1005 10:29:36.525794 2393 log.go:181] (0x2590070) Go away received\nI1005 10:29:36.528034 2393 log.go:181] (0x2590070) (0x25900e0) Stream removed, broadcasting: 1\nI1005 10:29:36.528242 2393 log.go:181] (0x2590070) (0x2590310) Stream 
removed, broadcasting: 3\nI1005 10:29:36.528424 2393 log.go:181] (0x2590070) (0x25c2070) Stream removed, broadcasting: 5\n" Oct 5 10:29:36.538: INFO: stdout: "" Oct 5 10:29:36.540: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7050 execpodn85dg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31495' Oct 5 10:29:38.062: INFO: stderr: "I1005 10:29:37.930864 2413 log.go:181] (0x2c68000) (0x2c68070) Create stream\nI1005 10:29:37.932881 2413 log.go:181] (0x2c68000) (0x2c68070) Stream added, broadcasting: 1\nI1005 10:29:37.941663 2413 log.go:181] (0x2c68000) Reply frame received for 1\nI1005 10:29:37.942334 2413 log.go:181] (0x2c68000) (0x2c68310) Create stream\nI1005 10:29:37.942451 2413 log.go:181] (0x2c68000) (0x2c68310) Stream added, broadcasting: 3\nI1005 10:29:37.944039 2413 log.go:181] (0x2c68000) Reply frame received for 3\nI1005 10:29:37.944396 2413 log.go:181] (0x2c68000) (0x30049a0) Create stream\nI1005 10:29:37.944472 2413 log.go:181] (0x2c68000) (0x30049a0) Stream added, broadcasting: 5\nI1005 10:29:37.945909 2413 log.go:181] (0x2c68000) Reply frame received for 5\nI1005 10:29:38.043061 2413 log.go:181] (0x2c68000) Data frame received for 3\nI1005 10:29:38.043390 2413 log.go:181] (0x2c68310) (3) Data frame handling\nI1005 10:29:38.043664 2413 log.go:181] (0x2c68000) Data frame received for 5\nI1005 10:29:38.043859 2413 log.go:181] (0x30049a0) (5) Data frame handling\nI1005 10:29:38.044049 2413 log.go:181] (0x2c68000) Data frame received for 1\nI1005 10:29:38.044135 2413 log.go:181] (0x2c68070) (1) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 31495\nConnection to 172.18.0.12 31495 port [tcp/31495] succeeded!\nI1005 10:29:38.046287 2413 log.go:181] (0x30049a0) (5) Data frame sent\nI1005 10:29:38.046371 2413 log.go:181] (0x2c68070) (1) Data frame sent\nI1005 10:29:38.046619 2413 log.go:181] (0x2c68000) Data frame received for 5\nI1005 10:29:38.046759 2413 log.go:181] 
(0x30049a0) (5) Data frame handling\nI1005 10:29:38.047660 2413 log.go:181] (0x2c68000) (0x2c68070) Stream removed, broadcasting: 1\nI1005 10:29:38.048763 2413 log.go:181] (0x2c68000) Go away received\nI1005 10:29:38.052108 2413 log.go:181] (0x2c68000) (0x2c68070) Stream removed, broadcasting: 1\nI1005 10:29:38.052337 2413 log.go:181] (0x2c68000) (0x2c68310) Stream removed, broadcasting: 3\nI1005 10:29:38.052521 2413 log.go:181] (0x2c68000) (0x30049a0) Stream removed, broadcasting: 5\n" Oct 5 10:29:38.063: INFO: stdout: "" Oct 5 10:29:38.063: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7050 execpodn85dg -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31495' Oct 5 10:29:39.619: INFO: stderr: "I1005 10:29:39.486594 2433 log.go:181] (0x2e4c000) (0x2e4c070) Create stream\nI1005 10:29:39.491425 2433 log.go:181] (0x2e4c000) (0x2e4c070) Stream added, broadcasting: 1\nI1005 10:29:39.500820 2433 log.go:181] (0x2e4c000) Reply frame received for 1\nI1005 10:29:39.501768 2433 log.go:181] (0x2e4c000) (0x2e4c230) Create stream\nI1005 10:29:39.501846 2433 log.go:181] (0x2e4c000) (0x2e4c230) Stream added, broadcasting: 3\nI1005 10:29:39.503702 2433 log.go:181] (0x2e4c000) Reply frame received for 3\nI1005 10:29:39.504201 2433 log.go:181] (0x2e4c000) (0x2682770) Create stream\nI1005 10:29:39.504312 2433 log.go:181] (0x2e4c000) (0x2682770) Stream added, broadcasting: 5\nI1005 10:29:39.506360 2433 log.go:181] (0x2e4c000) Reply frame received for 5\nI1005 10:29:39.598544 2433 log.go:181] (0x2e4c000) Data frame received for 3\nI1005 10:29:39.598842 2433 log.go:181] (0x2e4c230) (3) Data frame handling\nI1005 10:29:39.599106 2433 log.go:181] (0x2e4c000) Data frame received for 5\nI1005 10:29:39.599232 2433 log.go:181] (0x2682770) (5) Data frame handling\nI1005 10:29:39.599449 2433 log.go:181] (0x2e4c000) Data frame received for 1\nI1005 10:29:39.599610 2433 log.go:181] (0x2e4c070) (1) Data frame 
handling\n+ nc -zv -t -w 2 172.18.0.13 31495\nConnection to 172.18.0.13 31495 port [tcp/31495] succeeded!\nI1005 10:29:39.601371 2433 log.go:181] (0x2682770) (5) Data frame sent\nI1005 10:29:39.601987 2433 log.go:181] (0x2e4c070) (1) Data frame sent\nI1005 10:29:39.602359 2433 log.go:181] (0x2e4c000) Data frame received for 5\nI1005 10:29:39.602521 2433 log.go:181] (0x2e4c000) (0x2e4c070) Stream removed, broadcasting: 1\nI1005 10:29:39.604019 2433 log.go:181] (0x2682770) (5) Data frame handling\nI1005 10:29:39.605398 2433 log.go:181] (0x2e4c000) Go away received\nI1005 10:29:39.607663 2433 log.go:181] (0x2e4c000) (0x2e4c070) Stream removed, broadcasting: 1\nI1005 10:29:39.608142 2433 log.go:181] (0x2e4c000) (0x2e4c230) Stream removed, broadcasting: 3\nI1005 10:29:39.608340 2433 log.go:181] (0x2e4c000) (0x2682770) Stream removed, broadcasting: 5\n" Oct 5 10:29:39.620: INFO: stdout: "" Oct 5 10:29:39.621: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:29:39.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7050" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.569 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":303,"completed":109,"skipped":1874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:29:39.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] 
[k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:29:43.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2760" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":110,"skipped":1901,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:29:43.813: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-8aeb1d3d-2bd0-4309-bd2e-fca3117e13c3 STEP: Creating secret with name s-test-opt-upd-db758741-3a4d-4cdf-9765-c14ee429290f STEP: Creating the pod STEP: Deleting secret s-test-opt-del-8aeb1d3d-2bd0-4309-bd2e-fca3117e13c3 STEP: Updating secret s-test-opt-upd-db758741-3a4d-4cdf-9765-c14ee429290f STEP: Creating secret with name s-test-opt-create-f5ee7c12-91ba-48a3-a560-b06941c1e357 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:31:20.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2588" for this suite. • [SLOW TEST:96.934 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":111,"skipped":1909,"failed":0} SSSS ------------------------------ [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:31:20.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:33:20.889: INFO: Deleting pod 
"var-expansion-21f597d3-ef22-4b13-8c93-cb400e28f66c" in namespace "var-expansion-2205" Oct 5 10:33:20.897: INFO: Wait up to 5m0s for pod "var-expansion-21f597d3-ef22-4b13-8c93-cb400e28f66c" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:33:24.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2205" for this suite. • [SLOW TEST:124.178 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":303,"completed":112,"skipped":1913,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:33:24.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function 
for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Performing setup for networking test in namespace pod-network-test-7698 STEP: creating a selector STEP: Creating the service pods in kubernetes Oct 5 10:33:24.994: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Oct 5 10:33:25.172: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:33:27.180: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:33:29.180: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:33:31.181: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:33.180: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:35.181: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:37.180: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:39.180: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:41.181: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:43.179: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:45.180: INFO: The status of Pod netserver-0 is Running (Ready = false) Oct 5 10:33:47.191: INFO: The status of Pod netserver-0 is Running (Ready = true) Oct 5 10:33:47.199: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Oct 5 10:33:53.290: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.40 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7698 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:33:53.290: INFO: >>> kubeConfig: 
/root/.kube/config I1005 10:33:53.399189 10 log.go:181] (0xb1c23f0) (0xb1c2460) Create stream I1005 10:33:53.399335 10 log.go:181] (0xb1c23f0) (0xb1c2460) Stream added, broadcasting: 1 I1005 10:33:53.403524 10 log.go:181] (0xb1c23f0) Reply frame received for 1 I1005 10:33:53.403749 10 log.go:181] (0xb1c23f0) (0xb1c2690) Create stream I1005 10:33:53.403860 10 log.go:181] (0xb1c23f0) (0xb1c2690) Stream added, broadcasting: 3 I1005 10:33:53.405521 10 log.go:181] (0xb1c23f0) Reply frame received for 3 I1005 10:33:53.405687 10 log.go:181] (0xb1c23f0) (0xb1c2850) Create stream I1005 10:33:53.405778 10 log.go:181] (0xb1c23f0) (0xb1c2850) Stream added, broadcasting: 5 I1005 10:33:53.407507 10 log.go:181] (0xb1c23f0) Reply frame received for 5 I1005 10:33:54.490928 10 log.go:181] (0xb1c23f0) Data frame received for 5 I1005 10:33:54.491191 10 log.go:181] (0xb1c2850) (5) Data frame handling I1005 10:33:54.491425 10 log.go:181] (0xb1c23f0) Data frame received for 3 I1005 10:33:54.491725 10 log.go:181] (0xb1c2690) (3) Data frame handling I1005 10:33:54.491995 10 log.go:181] (0xb1c2690) (3) Data frame sent I1005 10:33:54.492192 10 log.go:181] (0xb1c23f0) Data frame received for 3 I1005 10:33:54.492453 10 log.go:181] (0xb1c2690) (3) Data frame handling I1005 10:33:54.493176 10 log.go:181] (0xb1c23f0) Data frame received for 1 I1005 10:33:54.493261 10 log.go:181] (0xb1c2460) (1) Data frame handling I1005 10:33:54.493346 10 log.go:181] (0xb1c2460) (1) Data frame sent I1005 10:33:54.493436 10 log.go:181] (0xb1c23f0) (0xb1c2460) Stream removed, broadcasting: 1 I1005 10:33:54.493562 10 log.go:181] (0xb1c23f0) Go away received I1005 10:33:54.494236 10 log.go:181] (0xb1c23f0) (0xb1c2460) Stream removed, broadcasting: 1 I1005 10:33:54.494379 10 log.go:181] (0xb1c23f0) (0xb1c2690) Stream removed, broadcasting: 3 I1005 10:33:54.494503 10 log.go:181] (0xb1c23f0) (0xb1c2850) Stream removed, broadcasting: 5 Oct 5 10:33:54.494: INFO: Found all expected endpoints: [netserver-0] Oct 5 
10:33:54.501: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.39 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7698 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:33:54.501: INFO: >>> kubeConfig: /root/.kube/config I1005 10:33:54.606798 10 log.go:181] (0xb1c2d90) (0xb1c2e00) Create stream I1005 10:33:54.606935 10 log.go:181] (0xb1c2d90) (0xb1c2e00) Stream added, broadcasting: 1 I1005 10:33:54.611427 10 log.go:181] (0xb1c2d90) Reply frame received for 1 I1005 10:33:54.611673 10 log.go:181] (0xb1c2d90) (0xaee3500) Create stream I1005 10:33:54.611838 10 log.go:181] (0xb1c2d90) (0xaee3500) Stream added, broadcasting: 3 I1005 10:33:54.613944 10 log.go:181] (0xb1c2d90) Reply frame received for 3 I1005 10:33:54.614164 10 log.go:181] (0xb1c2d90) (0xaee3ce0) Create stream I1005 10:33:54.614355 10 log.go:181] (0xb1c2d90) (0xaee3ce0) Stream added, broadcasting: 5 I1005 10:33:54.616349 10 log.go:181] (0xb1c2d90) Reply frame received for 5 I1005 10:33:55.675795 10 log.go:181] (0xb1c2d90) Data frame received for 3 I1005 10:33:55.675997 10 log.go:181] (0xaee3500) (3) Data frame handling I1005 10:33:55.676231 10 log.go:181] (0xb1c2d90) Data frame received for 5 I1005 10:33:55.676386 10 log.go:181] (0xaee3ce0) (5) Data frame handling I1005 10:33:55.676574 10 log.go:181] (0xaee3500) (3) Data frame sent I1005 10:33:55.676799 10 log.go:181] (0xb1c2d90) Data frame received for 3 I1005 10:33:55.677211 10 log.go:181] (0xaee3500) (3) Data frame handling I1005 10:33:55.677764 10 log.go:181] (0xb1c2d90) Data frame received for 1 I1005 10:33:55.677896 10 log.go:181] (0xb1c2e00) (1) Data frame handling I1005 10:33:55.678053 10 log.go:181] (0xb1c2e00) (1) Data frame sent I1005 10:33:55.678203 10 log.go:181] (0xb1c2d90) (0xb1c2e00) Stream removed, broadcasting: 1 I1005 10:33:55.678340 10 log.go:181] (0xb1c2d90) Go away received I1005 10:33:55.678762 10 log.go:181] 
(0xb1c2d90) (0xb1c2e00) Stream removed, broadcasting: 1 I1005 10:33:55.678920 10 log.go:181] (0xb1c2d90) (0xaee3500) Stream removed, broadcasting: 3 I1005 10:33:55.679030 10 log.go:181] (0xb1c2d90) (0xaee3ce0) Stream removed, broadcasting: 5 Oct 5 10:33:55.679: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:33:55.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7698" for this suite. • [SLOW TEST:30.769 seconds] [sig-network] Networking /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":113,"skipped":1914,"failed":0} SSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 
10:33:55.697: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:33:55.753: INFO: Creating deployment "webserver-deployment" Oct 5 10:33:55.759: INFO: Waiting for observed generation 1 Oct 5 10:33:57.843: INFO: Waiting for all required pods to come up Oct 5 10:33:57.856: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Oct 5 10:34:09.877: INFO: Waiting for deployment "webserver-deployment" to complete Oct 5 10:34:09.890: INFO: Updating deployment "webserver-deployment" with a non-existent image Oct 5 10:34:09.904: INFO: Updating deployment webserver-deployment Oct 5 10:34:09.904: INFO: Waiting for observed generation 2 Oct 5 10:34:11.931: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Oct 5 10:34:11.936: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Oct 5 10:34:11.941: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 5 10:34:11.957: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Oct 5 10:34:11.958: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Oct 5 10:34:11.962: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Oct 5 10:34:11.992: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available 
replicas Oct 5 10:34:11.992: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Oct 5 10:34:12.006: INFO: Updating deployment webserver-deployment Oct 5 10:34:12.006: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Oct 5 10:34:12.027: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Oct 5 10:34:12.046: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 10:34:12.184: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5969 /apis/apps/v1/namespaces/deployment-5969/deployments/webserver-deployment 5a39f22c-07db-492f-955c-3c979ac58eac 3163265 3 2020-10-05 10:33:55 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9be2488 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2020-10-05 10:34:11 +0000 
UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-05 10:34:12 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Oct 5 10:34:12.253: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-5969 /apis/apps/v1/namespaces/deployment-5969/replicasets/webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 3163249 3 2020-10-05 10:34:09 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 5a39f22c-07db-492f-955c-3c979ac58eac 0x9be28a7 0x9be28a8}] [] [{kube-controller-manager Update apps/v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a39f22c-07db-492f-955c-3c979ac58eac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9be2928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:34:12.253: INFO: All old ReplicaSets of Deployment "webserver-deployment": Oct 5 10:34:12.255: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-5969 /apis/apps/v1/namespaces/deployment-5969/replicasets/webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 3163306 3 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 5a39f22c-07db-492f-955c-3c979ac58eac 0x9be2987 0x9be2988}] [] [{kube-controller-manager Update apps/v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5a39f22c-07db-492f-955c-3c979ac58eac\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.La
belSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9be29f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:34:12.369: INFO: Pod "webserver-deployment-795d758f88-68s5t" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-68s5t webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-68s5t 442e0674-7398-47e8-800e-a0ebc1a19712 3163298 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c161a7 0x9c161a8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.370: INFO: Pod "webserver-deployment-795d758f88-995pn" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-995pn webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-995pn e8b3a940-4e55-4572-8a8c-b9fb0a438118 3163294 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16307 0x9c16308}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount
{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedu
led,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.371: INFO: Pod "webserver-deployment-795d758f88-gdwhj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gdwhj webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-gdwhj 4fce6d65-6b12-4a1d-b543-4e1e8a7edcf6 3163278 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16447 0x9c16448}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAP
I:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},H
ostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.373: INFO: Pod "webserver-deployment-795d758f88-ghz5n" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-ghz5n webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-ghz5n ed49aa29-1b70-4929-b119-9f17d20ca351 3163211 0 2020-10-05 10:34:09 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16587 0x9c16588}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-05 10:34:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.374: INFO: Pod "webserver-deployment-795d758f88-gqzts" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-gqzts webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-gqzts 7dfab6df-9c59-428d-9d0b-ff3ec234eee0 3163219 0 2020-10-05 10:34:09 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16757 0x9c16758}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{
},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:
[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-05 10:34:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.375: INFO: Pod "webserver-deployment-795d758f88-jdlpl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jdlpl webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-jdlpl 5fb47386-70ac-4e14-9baa-f626652d75a7 3163295 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16907 0x9c16908}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.377: INFO: Pod "webserver-deployment-795d758f88-k4plz" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-k4plz webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-k4plz 82561903-baf0-4c5b-aba1-b3a71199d445 3163317 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16a47 0x9c16a48}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount
{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedul
ed,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.378: INFO: Pod "webserver-deployment-795d758f88-pqbwc" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-pqbwc webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-pqbwc 1786e0dc-bbf1-4c31-a2f9-cb7ac1b9b30b 3163233 0 2020-10-05 10:34:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16b87 0x9c16b88}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-05 10:34:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.379: INFO: Pod "webserver-deployment-795d758f88-t2xpz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-t2xpz webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-t2xpz 195a8da9-2135-4dc8-a071-ce3ec7ac30d7 3163292 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16d37 0x9c16d38}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.380: INFO: Pod "webserver-deployment-795d758f88-t9h76" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-t9h76 webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-t9h76 89d72b70-d79b-4881-810f-e248b72ca74f 3163276 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16e77 0x9c16e78}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount
{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodSchedu
led,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.386: INFO: Pod "webserver-deployment-795d758f88-wbt4t" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-wbt4t webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-wbt4t 35a957ad-58c5-49fc-88cd-114f037e9fbf 3163235 0 2020-10-05 10:34:10 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c16fb7 0x9c16fb8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-05 10:34:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.387: INFO: Pod "webserver-deployment-795d758f88-x9wp7" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-x9wp7 webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-x9wp7 9b3692dd-2f53-452d-a709-dc6220cb41a5 3163267 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c17167 0x9c17168}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile
:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.389: INFO: Pod "webserver-deployment-795d758f88-zqbbt" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-zqbbt webserver-deployment-795d758f88- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-795d758f88-zqbbt 9d3b4edd-4455-43d4-8fff-c544dd8695a3 3163208 0 2020-10-05 10:34:09 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 bdbf124f-05f2-4f03-823a-b2f55ab5ee96 0x9c172a7 0x9c172a8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bdbf124f-05f2-4f03-823a-b2f55ab5ee96\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},Resta
rtPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 10:34:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-05 10:34:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.391: INFO: Pod "webserver-deployment-dd94f59b7-298mt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-298mt webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-298mt 3f57abd5-60cf-4899-9f44-baa3efa8b5d1 3163144 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17457 0x9c17458}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.45,StartTime:2020-10-05 10:33:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e055439bf510f79ef48f1da96a4a091594293c5df8453c4c8377e048ff859694,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.393: INFO: Pod "webserver-deployment-dd94f59b7-5l2s7" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5l2s7 webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-5l2s7 6fdcb2dc-2333-433b-a82a-2e0429bb6d63 3163305 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17607 0x9c17608}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.396: INFO: Pod "webserver-deployment-dd94f59b7-5ztcw" is available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-5ztcw webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-5ztcw d5f84090-cfa4-4be0-bbf3-029551d36326 3163152 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17737 0x9c17738}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:07 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.43,StartTime:2020-10-05 10:33:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a331523e6d38a686ed3fc8da942df0c07a1c56f5a6286d8912d1b4db6eab3005,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.398: INFO: Pod "webserver-deployment-dd94f59b7-6n6cs" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6n6cs webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-6n6cs 1c4f0dd2-4d3d-4108-bd8a-4d1f1823f1df 3163156 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c178e7 0x9c178e8}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.44,StartTime:2020-10-05 10:33:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c7b9f9f36f3357303f543fc1ad6c721b93fa29d4e3dd160aa2570bd84924f988,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.399: INFO: Pod "webserver-deployment-dd94f59b7-6v9qd" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-6v9qd webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-6v9qd d393b759-2010-4502-94b8-8e5932b4b771 3163286 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17a97 
0x9c17a98}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Read
OnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.400: INFO: 
Pod "webserver-deployment-dd94f59b7-8ngl2" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-8ngl2 webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-8ngl2 85f796a0-4648-4a70-bb2b-23acebad94b2 3163288 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17bc7 0x9c17bc8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits
:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Stat
us:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.401: INFO: Pod "webserver-deployment-dd94f59b7-9xn7r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-9xn7r webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-9xn7r 8eeca5bc-3630-4a07-9d25-9ff21b703f0c 3163307 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17cf7 0x9c17cf8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.403: INFO: Pod "webserver-deployment-dd94f59b7-c5pb2" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-c5pb2 webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-c5pb2 761e351c-d2af-4187-9a64-2905f00916da 3163293 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17e27 0x9c17e28}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:,StartTime:2020-10-05 10:34:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.404: INFO: Pod "webserver-deployment-dd94f59b7-fjs2g" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-fjs2g webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-fjs2g 414e5ea4-5ac0-406e-9bc8-cc651b92ca48 3163268 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c17fb7 0x9c17fb8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.405: INFO: Pod "webserver-deployment-dd94f59b7-g4nc9" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-g4nc9 webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-g4nc9 2f014f14-89f1-479d-be30-c7650346e787 3163277 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c520e7 0x9c520e8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.406: INFO: Pod "webserver-deployment-dd94f59b7-hjdqt" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-hjdqt webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-hjdqt 57cd49a6-ed23-4dc4-ad01-0aa94b5b9093 3163087 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52217 0x9c52218}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.41,StartTime:2020-10-05 10:33:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://77370cd2c257978ece9ef6aa1b79ab86cb3251b4d1ba0dac3843f5f06a757f64,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.408: INFO: Pod "webserver-deployment-dd94f59b7-jk8hm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-jk8hm webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-jk8hm 6e00ea1d-6724-4fe9-b509-7c351e6169a1 3163124 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c523c7 0x9c523c8}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.42,StartTime:2020-10-05 10:33:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://87fca5941788ddf7d87ecc242a92fa6f5648b4c525a70b8c9085f4b5e3658330,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.409: INFO: Pod "webserver-deployment-dd94f59b7-k658p" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-k658p webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-k658p b5026f46-a051-4c19-b44c-9abdf704b57f 3163308 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52597 
0x9c52598}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.411: INFO: 
Pod "webserver-deployment-dd94f59b7-rsxpg" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-rsxpg webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-rsxpg 2a1bc61e-8a47-4e6d-a2a5-c44d3f5ba5bc 3163142 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c526c7 0x9c526c8}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.45,StartTime:2020-10-05 10:33:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4150f8c4a598d703ac15760e4f00f58465bfb25eb6ed694532c6bb22ba0775da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.412: INFO: Pod "webserver-deployment-dd94f59b7-t57dw" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t57dw webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-t57dw a93ebe96-ab32-4014-b078-2f8451003fff 3163107 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52877 0x9c52878}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:Resou
rceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHos
tnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.42,StartTime:2020-10-05 10:33:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://111843220268c542abe0449eae75b7e5123c6e99edb640a5071f716f38db9176,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.413: INFO: Pod "webserver-deployment-dd94f59b7-t6m7r" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t6m7r webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-t6m7r c275abf5-628c-47e8-a6d9-d3a7beca6f7d 3163279 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52a37 
0x9c52a38}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,Read
OnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.414: INFO: 
Pod "webserver-deployment-dd94f59b7-t6nhm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-t6nhm webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-t6nhm 2c0025ba-1199-4d63-a2aa-0390527c4615 3163322 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52b67 0x9c52b68}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-05 10:34:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.415: INFO: Pod "webserver-deployment-dd94f59b7-v7v8g" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-v7v8g webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-v7v8g f5016365-298d-4332-bf0b-52ce16e83dbe 3163134 0 2020-10-05 10:33:55 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52cf7 0x9c52cf8}] [] [{kube-controller-manager Update v1 2020-10-05 10:33:55 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:34:06 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:33:55 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.41,StartTime:2020-10-05 10:33:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:34:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://81e86b927db2edaa0dad131fd9755d036784926d961419663a6d3d10bc92ee56,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.416: INFO: Pod "webserver-deployment-dd94f59b7-vzd8z" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vzd8z webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-vzd8z 54ce370b-c20f-4456-b2f8-fe7f6e5e97bd 3163302 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52ea7 0x9c52ea8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOp
tions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:34:12.417: INFO: Pod "webserver-deployment-dd94f59b7-w76bq" is not available: 
&Pod{ObjectMeta:{webserver-deployment-dd94f59b7-w76bq webserver-deployment-dd94f59b7- deployment-5969 /api/v1/namespaces/deployment-5969/pods/webserver-deployment-dd94f59b7-w76bq 0f43f958-e7ea-4dc6-afa9-8bdba05c71b2 3163299 0 2020-10-05 10:34:12 +0000 UTC map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 6f519abd-2420-4c11-9241-f2e995f9d50d 0x9c52fd7 0x9c52fd8}] [] [{kube-controller-manager Update v1 2020-10-05 10:34:12 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6f519abd-2420-4c11-9241-f2e995f9d50d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75bqn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75bqn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75bqn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:34:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:12.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5969" for this suite. • [SLOW TEST:17.020 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":303,"completed":114,"skipped":1918,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:12.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 10:34:32.774: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:32.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2086" for this suite. 
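[editor's note] The termination-message test above asserts `Expected: &{OK} to match Container's Termination Message: OK`. A hypothetical manifest for the kind of pod it creates — only the policy name and the expected "OK" message come from the log; the pod name, container name, image, and command are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main                          # assumed container name
    image: busybox                      # assumed image
    # Writing "OK" to the termination-log path is what makes the
    # "termination message should be set" assertion pass.
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
```

With FallbackToLogsOnError, the kubelet falls back to the container's log tail only when the container fails and the termination-log file is empty; here the pod succeeds, so the file contents are used directly.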
• [SLOW TEST:20.197 seconds] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 blackbox test /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 on terminated container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:134 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":115,"skipped":1927,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:32.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:33.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5939" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":303,"completed":116,"skipped":1929,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:33.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Oct 5 10:34:47.744: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 10:34:47.863: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 10:34:49.865: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 10:34:49.871: INFO: Pod pod-with-prestop-http-hook still exists Oct 5 10:34:51.864: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Oct 5 10:34:51.872: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:51.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9055" for this suite. 
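[editor's note] The preStop test above creates a helper pod ("pod-handle-http-request") to receive the hook, then deletes "pod-with-prestop-http-hook" and checks the hook fired. A hypothetical sketch of the hooked pod — the pod name comes from the log; the image, path, and port are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook      # name taken from the log
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.2         # assumed image
    lifecycle:
      preStop:
        # On pod deletion the kubelet issues this GET before SIGTERM;
        # the helper pod serving HTTP would record it (path/port assumed).
        httpGet:
          path: /echo?msg=prestop
          port: 8080
```

The repeated "Waiting for pod ... to disappear" lines in the log are the test polling until graceful deletion, including the hook, has completed.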
• [SLOW TEST:18.533 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":303,"completed":117,"skipped":1936,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:51.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 10:34:56.571: INFO: Successfully updated pod "labelsupdate647f8974-eeab-44ab-bcf2-21463d367225" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:58.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9696" for this suite. • [SLOW TEST:6.712 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":118,"skipped":1960,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:58.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 10:34:58.721: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 10:34:58.785: INFO: Waiting for terminating namespaces to be deleted... Oct 5 10:34:58.791: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 5 10:34:58.799: INFO: pod-handle-http-request from container-lifecycle-hook-9055 started at 2020-10-05 10:34:33 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.799: INFO: Container pod-handle-http-request ready: true, restart count 0 Oct 5 10:34:58.799: INFO: labelsupdate647f8974-eeab-44ab-bcf2-21463d367225 from downward-api-9696 started at 2020-10-05 10:34:52 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.799: INFO: Container client-container ready: true, restart count 0 Oct 5 10:34:58.799: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.800: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 10:34:58.800: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.800: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 10:34:58.800: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 5 10:34:58.809: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.809: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 10:34:58.809: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 10:34:58.809: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.163b123219c35389], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.163b12321d6d644e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:34:59.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9386" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":303,"completed":119,"skipped":1961,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:34:59.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Oct 5 10:35:00.004: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:35:18.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2230" for this suite. 
• [SLOW TEST:18.271 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":303,"completed":120,"skipped":1984,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:35:18.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [sig-storage] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test substitution in volume subpath Oct 5 10:35:18.239: INFO: Waiting up to 5m0s for pod "var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915" in namespace "var-expansion-7534" to be "Succeeded or Failed" Oct 5 10:35:18.248: INFO: Pod "var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.346081ms Oct 5 10:35:20.282: INFO: Pod "var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042669648s Oct 5 10:35:22.288: INFO: Pod "var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049365187s STEP: Saw pod success Oct 5 10:35:22.289: INFO: Pod "var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915" satisfied condition "Succeeded or Failed" Oct 5 10:35:22.293: INFO: Trying to get logs from node kali-worker pod var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915 container dapi-container: STEP: delete the pod Oct 5 10:35:22.337: INFO: Waiting for pod var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915 to disappear Oct 5 10:35:22.502: INFO: Pod var-expansion-a5a39d8f-cb5b-4488-a62c-5288903ed915 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:35:22.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7534" for this suite. 
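[editor's note] The variable-expansion test above runs a container named "dapi-container" (from the log) that exercises substitution in a volume subpath. A hypothetical equivalent manifest — `subPathExpr` is the API field that performs this substitution; the pod name, image, command, mount path, and volume are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo              # hypothetical; the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container                # container name from the log
    image: busybox                      # assumed image
    command: ["sh", "-c", "ls /subpath_mount"]   # assumed command
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /subpath_mount         # assumed mount path
      subPathExpr: $(POD_NAME)          # the substitution under test
  volumes:
  - name: workdir
    emptyDir: {}
```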
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":303,"completed":121,"skipped":1999,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:35:22.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:35:32.274: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:35:34.295: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:35:36.303: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737490932, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:35:39.336: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and 
MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:35:39.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3027" for this suite. STEP: Destroying namespace "webhook-3027-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.137 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":303,"completed":122,"skipped":2001,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be 
consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:35:39.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-59bd66ad-7942-4d88-8792-0063b9d40719 STEP: Creating a pod to test consume secrets Oct 5 10:35:39.731: INFO: Waiting up to 5m0s for pod "pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a" in namespace "secrets-5569" to be "Succeeded or Failed" Oct 5 10:35:39.767: INFO: Pod "pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.28861ms Oct 5 10:35:41.815: INFO: Pod "pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.08312953s Oct 5 10:35:43.824: INFO: Pod "pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.092074695s STEP: Saw pod success Oct 5 10:35:43.824: INFO: Pod "pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a" satisfied condition "Succeeded or Failed" Oct 5 10:35:43.827: INFO: Trying to get logs from node kali-worker pod pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a container secret-volume-test: STEP: delete the pod Oct 5 10:35:43.886: INFO: Waiting for pod pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a to disappear Oct 5 10:35:43.895: INFO: Pod pod-secrets-8277de62-04c9-4241-8741-01d56de22a7a no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:35:43.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5569" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":123,"skipped":2007,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:35:43.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:35:43.969: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f" in namespace "projected-1109" to be "Succeeded or Failed" Oct 5 10:35:43.980: INFO: Pod "downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.095682ms Oct 5 10:35:45.988: INFO: Pod "downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018552268s Oct 5 10:35:48.001: INFO: Pod "downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031390703s STEP: Saw pod success Oct 5 10:35:48.001: INFO: Pod "downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f" satisfied condition "Succeeded or Failed" Oct 5 10:35:48.006: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f container client-container: STEP: delete the pod Oct 5 10:35:48.035: INFO: Waiting for pod downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f to disappear Oct 5 10:35:48.052: INFO: Pod downwardapi-volume-4ee3fbe1-21bd-49da-b111-7f0f4d9f281f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:35:48.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1109" for this suite. 
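[editor's note] The projected downwardAPI test above mounts the container's memory limit into a volume read by "client-container" (name from the log). A hypothetical equivalent manifest — the resource field and projected-volume shape are the standard API for this; the pod name, image, command, limit value, and paths are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo         # hypothetical; the log's pod name is generated
spec:
  restartPolicy: Never
  containers:
  - name: client-container              # container name from the log
    image: busybox                      # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]   # assumed command
    resources:
      limits:
        memory: 64Mi                    # assumed limit value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo           # assumed mount path
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:           # exposes the memory limit under test
              containerName: client-container
              resource: limits.memory
```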
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":303,"completed":124,"skipped":2101,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:35:48.073: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create services for rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Oct 5 10:35:48.166: INFO: namespace kubectl-735
Oct 5 10:35:48.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-735'
Oct 5 10:35:50.449: INFO: stderr: ""
Oct 5 10:35:50.449: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 5 10:35:51.459: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:35:51.460: INFO: Found 0 / 1
Oct 5 10:35:52.558: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:35:52.558: INFO: Found 0 / 1
Oct 5 10:35:53.461: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:35:53.462: INFO: Found 1 / 1
Oct 5 10:35:53.462: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Oct 5 10:35:53.491: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:35:53.491: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 5 10:35:53.492: INFO: wait on agnhost-primary startup in kubectl-735
Oct 5 10:35:53.492: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config logs agnhost-primary-cjxx6 agnhost-primary --namespace=kubectl-735'
Oct 5 10:35:54.740: INFO: stderr: ""
Oct 5 10:35:54.740: INFO: stdout: "Paused\n"
STEP: exposing RC
Oct 5 10:35:54.741: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-735'
Oct 5 10:35:56.080: INFO: stderr: ""
Oct 5 10:35:56.080: INFO: stdout: "service/rm2 exposed\n"
Oct 5 10:35:56.086: INFO: Service rm2 in namespace kubectl-735 found.
STEP: exposing service
Oct 5 10:35:58.098: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-735'
Oct 5 10:35:59.582: INFO: stderr: ""
Oct 5 10:35:59.583: INFO: stdout: "service/rm3 exposed\n"
Oct 5 10:35:59.610: INFO: Service rm3 in namespace kubectl-735 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:36:01.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-735" for this suite.

• [SLOW TEST:13.567 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1246
    should create services for rc [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":303,"completed":125,"skipped":2113,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:36:01.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Oct 5 10:36:06.274: INFO: Successfully updated pod "pod-update-fe298b87-8111-474f-9b74-be9d9c9299ab"
STEP: verifying the updated pod is in kubernetes
Oct 5 10:36:06.287: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:36:06.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5017" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":303,"completed":126,"skipped":2129,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:36:06.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 5 10:36:06.360: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 5 10:36:06.396: INFO: Waiting for terminating namespaces to be deleted...
Oct 5 10:36:06.417: INFO: Logging pods the apiserver thinks is on node kali-worker before test
Oct 5 10:36:06.435: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.435: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 10:36:06.435: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.435: INFO: Container kube-proxy ready: true, restart count 0
Oct 5 10:36:06.436: INFO: agnhost-primary-cjxx6 from kubectl-735 started at 2020-10-05 10:35:50 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.436: INFO: Container agnhost-primary ready: true, restart count 0
Oct 5 10:36:06.436: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
Oct 5 10:36:06.443: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.443: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 10:36:06.443: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.443: INFO: Container kube-proxy ready: true, restart count 0
Oct 5 10:36:06.443: INFO: pod-update-fe298b87-8111-474f-9b74-be9d9c9299ab from pods-5017 started at 2020-10-05 10:36:01 +0000 UTC (1 container statuses recorded)
Oct 5 10:36:06.444: INFO: Container nginx ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-17d76266-6173-4935-8fd8-a461d9ec09bc 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-17d76266-6173-4935-8fd8-a461d9ec09bc off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-17d76266-6173-4935-8fd8-a461d9ec09bc
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7005" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81

• [SLOW TEST:310.411 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":303,"completed":127,"skipped":2142,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:16.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating the pod
Oct 5 10:41:16.829: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:26.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-534" for this suite.
• [SLOW TEST:9.956 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":303,"completed":128,"skipped":2143,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:26.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:26.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1707" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":303,"completed":129,"skipped":2203,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:26.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should add annotations for pods in rc [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating Agnhost RC
Oct 5 10:41:26.942: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2748'
Oct 5 10:41:34.310: INFO: stderr: ""
Oct 5 10:41:34.310: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Oct 5 10:41:35.452: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:35.452: INFO: Found 0 / 1
Oct 5 10:41:36.412: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:36.412: INFO: Found 0 / 1
Oct 5 10:41:37.337: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:37.337: INFO: Found 0 / 1
Oct 5 10:41:38.319: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:38.319: INFO: Found 1 / 1
Oct 5 10:41:38.319: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Oct 5 10:41:38.324: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:38.324: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Oct 5 10:41:38.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config patch pod agnhost-primary-9dhv2 --namespace=kubectl-2748 -p {"metadata":{"annotations":{"x":"y"}}}'
Oct 5 10:41:39.510: INFO: stderr: ""
Oct 5 10:41:39.510: INFO: stdout: "pod/agnhost-primary-9dhv2 patched\n"
STEP: checking annotations
Oct 5 10:41:39.517: INFO: Selector matched 1 pods for map[app:agnhost]
Oct 5 10:41:39.517: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:39.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2748" for this suite.
• [SLOW TEST:12.645 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1490
    should add annotations for pods in rc [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":303,"completed":130,"skipped":2214,"failed":0}
SS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:39.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:44.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9250" for this suite.

• [SLOW TEST:5.053 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":303,"completed":131,"skipped":2216,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:44.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Starting the proxy
Oct 5 10:41:44.701: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix871776050/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:45.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3690" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":303,"completed":132,"skipped":2238,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:45.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 10:41:45.821: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca" in namespace "downward-api-3895" to be "Succeeded or Failed"
Oct 5 10:41:45.842: INFO: Pod "downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca": Phase="Pending", Reason="", readiness=false. Elapsed: 21.293925ms
Oct 5 10:41:47.858: INFO: Pod "downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036981188s
Oct 5 10:41:49.871: INFO: Pod "downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050118128s
STEP: Saw pod success
Oct 5 10:41:49.871: INFO: Pod "downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca" satisfied condition "Succeeded or Failed"
Oct 5 10:41:49.878: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca container client-container:
STEP: delete the pod
Oct 5 10:41:49.998: INFO: Waiting for pod downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca to disappear
Oct 5 10:41:50.002: INFO: Pod downwardapi-volume-ee272f20-09e1-4d94-a5e5-ecd618b864ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:50.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3895" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":303,"completed":133,"skipped":2260,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:50.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Oct 5 10:41:54.237: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:41:54.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6716" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":134,"skipped":2275,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:41:54.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Oct 5 10:41:55.253: INFO: Pod name wrapped-volume-race-549b321b-5c19-4956-ae0a-ff60f7aefbc8: Found 0 pods out of 5
Oct 5 10:42:00.278: INFO: Pod name wrapped-volume-race-549b321b-5c19-4956-ae0a-ff60f7aefbc8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-549b321b-5c19-4956-ae0a-ff60f7aefbc8 in namespace emptydir-wrapper-9792, will wait for the garbage collector to delete the pods
Oct 5 10:42:14.417: INFO: Deleting ReplicationController wrapped-volume-race-549b321b-5c19-4956-ae0a-ff60f7aefbc8 took: 8.731384ms
Oct 5 10:42:14.918: INFO: Terminating ReplicationController wrapped-volume-race-549b321b-5c19-4956-ae0a-ff60f7aefbc8 pods took: 501.152155ms
STEP: Creating RC which spawns configmap-volume pods
Oct 5 10:42:29.096: INFO: Pod name wrapped-volume-race-869ffac2-85b8-4d84-96af-a74c7f4df16d: Found 0 pods out of 5
Oct 5 10:42:34.115: INFO: Pod name wrapped-volume-race-869ffac2-85b8-4d84-96af-a74c7f4df16d: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-869ffac2-85b8-4d84-96af-a74c7f4df16d in namespace emptydir-wrapper-9792, will wait for the garbage collector to delete the pods
Oct 5 10:42:48.261: INFO: Deleting ReplicationController wrapped-volume-race-869ffac2-85b8-4d84-96af-a74c7f4df16d took: 10.231705ms
Oct 5 10:42:48.762: INFO: Terminating ReplicationController wrapped-volume-race-869ffac2-85b8-4d84-96af-a74c7f4df16d pods took: 501.346257ms
STEP: Creating RC which spawns configmap-volume pods
Oct 5 10:42:59.004: INFO: Pod name wrapped-volume-race-30d656f1-1ca1-4578-a956-99a8f810bb6c: Found 0 pods out of 5
Oct 5 10:43:04.027: INFO: Pod name wrapped-volume-race-30d656f1-1ca1-4578-a956-99a8f810bb6c: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-30d656f1-1ca1-4578-a956-99a8f810bb6c in namespace emptydir-wrapper-9792, will wait for the garbage collector to delete the pods
Oct 5 10:43:18.148: INFO: Deleting ReplicationController wrapped-volume-race-30d656f1-1ca1-4578-a956-99a8f810bb6c took: 11.980771ms
Oct 5 10:43:18.649: INFO: Terminating ReplicationController wrapped-volume-race-30d656f1-1ca1-4578-a956-99a8f810bb6c pods took: 501.117247ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:43:28.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9792" for this suite.

• [SLOW TEST:94.634 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":303,"completed":135,"skipped":2296,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:43:28.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test override all
Oct 5 10:43:29.042: INFO: Waiting up to 5m0s for pod "client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258" in namespace "containers-7035" to be "Succeeded or Failed"
Oct 5 10:43:29.063: INFO: Pod "client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258": Phase="Pending", Reason="", readiness=false. Elapsed: 20.833301ms
Oct 5 10:43:31.071: INFO: Pod "client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028872337s
Oct 5 10:43:33.079: INFO: Pod "client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037263578s
STEP: Saw pod success
Oct 5 10:43:33.080: INFO: Pod "client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258" satisfied condition "Succeeded or Failed"
Oct 5 10:43:33.086: INFO: Trying to get logs from node kali-worker2 pod client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258 container test-container:
STEP: delete the pod
Oct 5 10:43:33.208: INFO: Waiting for pod client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258 to disappear
Oct 5 10:43:33.260: INFO: Pod client-containers-3a33a231-a6fb-4565-b3ac-617c8f29d258 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:43:33.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7035" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":303,"completed":136,"skipped":2298,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:43:33.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-d4dbb9aa-2e46-4621-8289-1037441beb40 STEP: Creating a pod to test consume secrets Oct 5 10:43:33.562: INFO: Waiting up to 5m0s for pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f" in namespace "secrets-2990" to be "Succeeded or Failed" Oct 5 10:43:33.669: INFO: Pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 107.058831ms Oct 5 10:43:35.773: INFO: Pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.211017736s Oct 5 10:43:37.789: INFO: Pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.226749628s Oct 5 10:43:39.797: INFO: Pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.235199153s STEP: Saw pod success Oct 5 10:43:39.797: INFO: Pod "pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f" satisfied condition "Succeeded or Failed" Oct 5 10:43:39.801: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f container secret-volume-test: STEP: delete the pod Oct 5 10:43:39.845: INFO: Waiting for pod pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f to disappear Oct 5 10:43:39.865: INFO: Pod pod-secrets-bd3e7cec-c2ea-4fe1-ad2b-b86e14b30f4f no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:43:39.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2990" for this suite. STEP: Destroying namespace "secret-namespace-4766" for this suite. 
• [SLOW TEST:6.596 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":303,"completed":137,"skipped":2301,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:43:39.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:43:55.689: INFO: deployment 
"sample-webhook-deployment" doesn't have the required revision set Oct 5 10:43:57.713: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491435, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491435, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491435, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491435, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:44:00.750: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding 
mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:44:00.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2575" for this suite. STEP: Destroying namespace "webhook-2575-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.013 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":303,"completed":138,"skipped":2308,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a 
kubernetes client Oct 5 10:44:00.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1581 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 5 10:44:01.020: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7832' Oct 5 10:44:02.282: INFO: stderr: "" Oct 5 10:44:02.282: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Oct 5 10:44:07.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7832 -o json' Oct 5 10:44:08.585: INFO: stderr: "" Oct 5 10:44:08.585: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-05T10:44:02Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n 
\"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T10:44:02Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.1.75\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T10:44:04Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7832\",\n \"resourceVersion\": \"3166940\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7832/pods/e2e-test-httpd-pod\",\n \"uid\": \"6e31fdd4-1ba8-487a-b97a-4b9e5421e1d4\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n 
\"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-d6q9z\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-d6q9z\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-d6q9z\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:44:02Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:44:04Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:44:04Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:44:02Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://836e4e0f287b22b8e03a624a98a619a3681995006d0d0ca7b8f8a2fd4017a2d5\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": 
true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-10-05T10:44:04Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.13\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.75\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.75\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-05T10:44:02Z\"\n }\n}\n" STEP: replace the image in the pod Oct 5 10:44:08.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7832' Oct 5 10:44:11.113: INFO: stderr: "" Oct 5 10:44:11.113: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1586 Oct 5 10:44:11.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7832' Oct 5 10:44:15.015: INFO: stderr: "" Oct 5 10:44:15.015: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:44:15.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7832" for this suite. 
• [SLOW TEST:14.132 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 should update a single-container pod's image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":303,"completed":139,"skipped":2319,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:44:15.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Probing 
container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:15.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3449" for this suite. • [SLOW TEST:60.095 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":303,"completed":140,"skipped":2323,"failed":0} SSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:45:15.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-e9700c8c-7e2e-4159-a07e-e2533075b3fc STEP: Creating a pod to test consume secrets Oct 5 10:45:15.202: INFO: Waiting up to 5m0s for pod "pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5" in namespace "secrets-1388" to be "Succeeded or Failed" Oct 5 10:45:15.219: INFO: Pod "pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.512588ms Oct 5 10:45:17.227: INFO: Pod "pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024578767s Oct 5 10:45:19.235: INFO: Pod "pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032399511s STEP: Saw pod success Oct 5 10:45:19.235: INFO: Pod "pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5" satisfied condition "Succeeded or Failed" Oct 5 10:45:19.240: INFO: Trying to get logs from node kali-worker pod pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5 container secret-volume-test: STEP: delete the pod Oct 5 10:45:19.421: INFO: Waiting for pod pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5 to disappear Oct 5 10:45:19.431: INFO: Pod pod-secrets-109ef542-7b6c-4524-acba-80cf8d6a16d5 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:19.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1388" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":141,"skipped":2328,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:45:19.470: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:32.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1721" for this suite. • [SLOW TEST:13.280 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":303,"completed":142,"skipped":2337,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:45:32.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:45:40.639: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:45:42.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:45:44.726: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491540, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:45:47.789: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:48.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2896" for this suite. STEP: Destroying namespace "webhook-2896-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.784 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":303,"completed":143,"skipped":2340,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:45:48.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:45:52.120: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:45:54.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491552, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491552, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491552, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491552, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:45:57.463: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the 
validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:57.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1088" for this suite. STEP: Destroying namespace "webhook-1088-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.585 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":303,"completed":144,"skipped":2353,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 
10:45:58.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if v1 is in available api versions [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating api versions Oct 5 10:45:58.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config api-versions' Oct 5 10:45:59.460: INFO: stderr: "" Oct 5 10:45:59.461: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:45:59.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1017" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":303,"completed":145,"skipped":2355,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:45:59.485: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 5 10:45:59.570: INFO: Waiting up to 1m0s for all nodes to be ready Oct 5 10:46:59.643: INFO: Waiting for terminating namespaces to be deleted... [It] validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 5 10:46:59.719: INFO: Created pod: pod0-sched-preemption-low-priority Oct 5 10:46:59.770: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a critical pod that use same resources as that of a lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:47:24.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-380" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:85.113 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates lower priority pod preemption by critical pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":303,"completed":146,"skipped":2361,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:47:24.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service 
account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1005 10:47:25.652623 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 10:48:27.681: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:48:27.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3017" for this suite. 
• [SLOW TEST:63.096 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":303,"completed":147,"skipped":2376,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:48:27.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:48:27.753: INFO: Creating ReplicaSet my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c Oct 5 10:48:27.808: INFO: Pod name my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c: Found 0 pods out of 1 Oct 5 10:48:32.817: INFO: Pod name my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c: Found 1 pods out of 1 Oct 5 10:48:32.817: INFO: Ensuring a pod for ReplicaSet 
"my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c" is running Oct 5 10:48:32.823: INFO: Pod "my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c-459l2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 10:48:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 10:48:31 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 10:48:31 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 10:48:27 +0000 UTC Reason: Message:}]) Oct 5 10:48:32.826: INFO: Trying to dial the pod Oct 5 10:48:37.847: INFO: Controller my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c: Got expected result from replica 1 [my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c-459l2]: "my-hostname-basic-edc8951f-d4c8-488a-994a-2bc2c3568f1c-459l2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:48:37.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6157" for this suite. 
• [SLOW TEST:10.165 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":148,"skipped":2389,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:48:37.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:48:37.931: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a" in namespace "downward-api-7845" to be "Succeeded or 
Failed" Oct 5 10:48:37.968: INFO: Pod "downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 36.464192ms Oct 5 10:48:40.023: INFO: Pod "downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091067252s Oct 5 10:48:42.032: INFO: Pod "downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100159012s STEP: Saw pod success Oct 5 10:48:42.032: INFO: Pod "downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a" satisfied condition "Succeeded or Failed" Oct 5 10:48:42.038: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a container client-container: STEP: delete the pod Oct 5 10:48:42.088: INFO: Waiting for pod downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a to disappear Oct 5 10:48:42.096: INFO: Pod downwardapi-volume-b05d2d55-5d1a-4178-ae56-e3323c81f14a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:48:42.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7845" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":303,"completed":149,"skipped":2402,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:48:42.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-3848 [It] should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating statefulset ss in namespace statefulset-3848 Oct 5 10:48:42.265: INFO: Found 0 stateful pods, waiting for 1 Oct 5 10:48:52.275: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality 
[StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 10:48:52.308: INFO: Deleting all statefulset in ns statefulset-3848 Oct 5 10:48:52.350: INFO: Scaling statefulset ss to 0 Oct 5 10:49:12.481: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 10:49:12.486: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:49:12.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3848" for this suite. • [SLOW TEST:30.425 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have a working scale subresource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":303,"completed":150,"skipped":2408,"failed":0} SS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:49:12.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test override arguments Oct 5 10:49:12.619: INFO: Waiting up to 5m0s for pod "client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea" in namespace "containers-6166" to be "Succeeded or Failed" Oct 5 10:49:12.659: INFO: Pod "client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea": Phase="Pending", Reason="", readiness=false. Elapsed: 39.994447ms Oct 5 10:49:14.667: INFO: Pod "client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047486748s Oct 5 10:49:16.674: INFO: Pod "client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.054642887s STEP: Saw pod success Oct 5 10:49:16.674: INFO: Pod "client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea" satisfied condition "Succeeded or Failed" Oct 5 10:49:16.679: INFO: Trying to get logs from node kali-worker pod client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea container test-container: STEP: delete the pod Oct 5 10:49:16.801: INFO: Waiting for pod client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea to disappear Oct 5 10:49:16.835: INFO: Pod client-containers-e06d387f-1e63-45d1-8862-f8c4ee2755ea no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:49:16.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6166" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":303,"completed":151,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:49:16.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on node default medium Oct 5 10:49:16.958: INFO: Waiting up to 5m0s for pod "pod-08686d19-616e-49ed-9023-fb83770e83b4" in namespace "emptydir-6361" to be "Succeeded or Failed" Oct 5 10:49:16.981: INFO: Pod "pod-08686d19-616e-49ed-9023-fb83770e83b4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.857933ms Oct 5 10:49:18.989: INFO: Pod "pod-08686d19-616e-49ed-9023-fb83770e83b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03111253s Oct 5 10:49:20.996: INFO: Pod "pod-08686d19-616e-49ed-9023-fb83770e83b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.038042789s Oct 5 10:49:23.003: INFO: Pod "pod-08686d19-616e-49ed-9023-fb83770e83b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044367949s STEP: Saw pod success Oct 5 10:49:23.003: INFO: Pod "pod-08686d19-616e-49ed-9023-fb83770e83b4" satisfied condition "Succeeded or Failed" Oct 5 10:49:23.009: INFO: Trying to get logs from node kali-worker pod pod-08686d19-616e-49ed-9023-fb83770e83b4 container test-container: STEP: delete the pod Oct 5 10:49:23.046: INFO: Waiting for pod pod-08686d19-616e-49ed-9023-fb83770e83b4 to disappear Oct 5 10:49:23.055: INFO: Pod pod-08686d19-616e-49ed-9023-fb83770e83b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:49:23.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6361" for this suite. 
• [SLOW TEST:6.208 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":152,"skipped":2426,"failed":0} SS ------------------------------ [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:49:23.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4044 Oct 5 10:49:27.200: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 
kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Oct 5 10:49:28.702: INFO: stderr: "I1005 10:49:28.566628 2688 log.go:181] (0x287efc0) (0x287f030) Create stream\nI1005 10:49:28.568441 2688 log.go:181] (0x287efc0) (0x287f030) Stream added, broadcasting: 1\nI1005 10:49:28.580981 2688 log.go:181] (0x287efc0) Reply frame received for 1\nI1005 10:49:28.582049 2688 log.go:181] (0x287efc0) (0x287f1f0) Create stream\nI1005 10:49:28.582182 2688 log.go:181] (0x287efc0) (0x287f1f0) Stream added, broadcasting: 3\nI1005 10:49:28.584126 2688 log.go:181] (0x287efc0) Reply frame received for 3\nI1005 10:49:28.584405 2688 log.go:181] (0x287efc0) (0x24aa540) Create stream\nI1005 10:49:28.584485 2688 log.go:181] (0x287efc0) (0x24aa540) Stream added, broadcasting: 5\nI1005 10:49:28.585915 2688 log.go:181] (0x287efc0) Reply frame received for 5\nI1005 10:49:28.678614 2688 log.go:181] (0x287efc0) Data frame received for 5\nI1005 10:49:28.678847 2688 log.go:181] (0x24aa540) (5) Data frame handling\nI1005 10:49:28.679203 2688 log.go:181] (0x24aa540) (5) Data frame sent\n+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\nI1005 10:49:28.684770 2688 log.go:181] (0x287efc0) Data frame received for 3\nI1005 10:49:28.684956 2688 log.go:181] (0x287f1f0) (3) Data frame handling\nI1005 10:49:28.685148 2688 log.go:181] (0x287efc0) Data frame received for 5\nI1005 10:49:28.685415 2688 log.go:181] (0x24aa540) (5) Data frame handling\nI1005 10:49:28.685645 2688 log.go:181] (0x287f1f0) (3) Data frame sent\nI1005 10:49:28.685753 2688 log.go:181] (0x287efc0) Data frame received for 3\nI1005 10:49:28.685884 2688 log.go:181] (0x287f1f0) (3) Data frame handling\nI1005 10:49:28.686559 2688 log.go:181] (0x287efc0) Data frame received for 1\nI1005 10:49:28.686741 2688 log.go:181] (0x287f030) (1) Data frame handling\nI1005 10:49:28.686973 2688 log.go:181] (0x287f030) (1) Data frame sent\nI1005 10:49:28.688143 2688 log.go:181] (0x287efc0) 
(0x287f030) Stream removed, broadcasting: 1\nI1005 10:49:28.690310 2688 log.go:181] (0x287efc0) Go away received\nI1005 10:49:28.693489 2688 log.go:181] (0x287efc0) (0x287f030) Stream removed, broadcasting: 1\nI1005 10:49:28.693744 2688 log.go:181] (0x287efc0) (0x287f1f0) Stream removed, broadcasting: 3\nI1005 10:49:28.693907 2688 log.go:181] (0x287efc0) (0x24aa540) Stream removed, broadcasting: 5\n" Oct 5 10:49:28.702: INFO: stdout: "iptables" Oct 5 10:49:28.703: INFO: proxyMode: iptables Oct 5 10:49:28.711: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:28.830: INFO: Pod kube-proxy-mode-detector still exists Oct 5 10:49:30.831: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:30.838: INFO: Pod kube-proxy-mode-detector still exists Oct 5 10:49:32.831: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:32.839: INFO: Pod kube-proxy-mode-detector still exists Oct 5 10:49:34.831: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:34.837: INFO: Pod kube-proxy-mode-detector still exists Oct 5 10:49:36.831: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:36.838: INFO: Pod kube-proxy-mode-detector still exists Oct 5 10:49:38.831: INFO: Waiting for pod kube-proxy-mode-detector to disappear Oct 5 10:49:38.837: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-4044 STEP: creating replication controller affinity-nodeport-timeout in namespace services-4044 I1005 10:49:38.921103 10 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4044, replica count: 3 I1005 10:49:41.972730 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:49:44.973827 10 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 10:49:44.994: INFO: Creating new exec pod Oct 5 10:49:50.030: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Oct 5 10:49:51.523: INFO: stderr: "I1005 10:49:51.401271 2708 log.go:181] (0x296e000) (0x296e070) Create stream\nI1005 10:49:51.405503 2708 log.go:181] (0x296e000) (0x296e070) Stream added, broadcasting: 1\nI1005 10:49:51.426058 2708 log.go:181] (0x296e000) Reply frame received for 1\nI1005 10:49:51.426553 2708 log.go:181] (0x296e000) (0x2d14070) Create stream\nI1005 10:49:51.426620 2708 log.go:181] (0x296e000) (0x2d14070) Stream added, broadcasting: 3\nI1005 10:49:51.427862 2708 log.go:181] (0x296e000) Reply frame received for 3\nI1005 10:49:51.428120 2708 log.go:181] (0x296e000) (0x2d142a0) Create stream\nI1005 10:49:51.428182 2708 log.go:181] (0x296e000) (0x2d142a0) Stream added, broadcasting: 5\nI1005 10:49:51.429301 2708 log.go:181] (0x296e000) Reply frame received for 5\nI1005 10:49:51.503471 2708 log.go:181] (0x296e000) Data frame received for 5\nI1005 10:49:51.503757 2708 log.go:181] (0x2d142a0) (5) Data frame handling\nI1005 10:49:51.503965 2708 log.go:181] (0x296e000) Data frame received for 3\nI1005 10:49:51.504123 2708 log.go:181] (0x2d14070) (3) Data frame handling\nI1005 10:49:51.504226 2708 log.go:181] (0x2d142a0) (5) Data frame sent\nI1005 10:49:51.504497 2708 log.go:181] (0x296e000) Data frame received for 5\nI1005 10:49:51.504615 2708 log.go:181] (0x2d142a0) (5) Data frame handling\n+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\nI1005 10:49:51.506970 2708 log.go:181] (0x296e000) Data frame received for 1\nI1005 10:49:51.507122 2708 log.go:181] (0x296e070) (1) Data frame handling\nI1005 10:49:51.507234 2708 
log.go:181] (0x2d142a0) (5) Data frame sent\nI1005 10:49:51.507401 2708 log.go:181] (0x296e000) Data frame received for 5\nI1005 10:49:51.507530 2708 log.go:181] (0x2d142a0) (5) Data frame handling\nI1005 10:49:51.507705 2708 log.go:181] (0x296e070) (1) Data frame sent\nI1005 10:49:51.508737 2708 log.go:181] (0x296e000) (0x296e070) Stream removed, broadcasting: 1\nI1005 10:49:51.510794 2708 log.go:181] (0x296e000) Go away received\nI1005 10:49:51.514266 2708 log.go:181] (0x296e000) (0x296e070) Stream removed, broadcasting: 1\nI1005 10:49:51.514509 2708 log.go:181] (0x296e000) (0x2d14070) Stream removed, broadcasting: 3\nI1005 10:49:51.514704 2708 log.go:181] (0x296e000) (0x2d142a0) Stream removed, broadcasting: 5\n" Oct 5 10:49:51.524: INFO: stdout: "" Oct 5 10:49:51.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c nc -zv -t -w 2 10.102.77.251 80' Oct 5 10:49:53.074: INFO: stderr: "I1005 10:49:52.946788 2728 log.go:181] (0x30820e0) (0x3082150) Create stream\nI1005 10:49:52.949242 2728 log.go:181] (0x30820e0) (0x3082150) Stream added, broadcasting: 1\nI1005 10:49:52.960339 2728 log.go:181] (0x30820e0) Reply frame received for 1\nI1005 10:49:52.961140 2728 log.go:181] (0x30820e0) (0x25ca070) Create stream\nI1005 10:49:52.961237 2728 log.go:181] (0x30820e0) (0x25ca070) Stream added, broadcasting: 3\nI1005 10:49:52.962807 2728 log.go:181] (0x30820e0) Reply frame received for 3\nI1005 10:49:52.963016 2728 log.go:181] (0x30820e0) (0x3082310) Create stream\nI1005 10:49:52.963072 2728 log.go:181] (0x30820e0) (0x3082310) Stream added, broadcasting: 5\nI1005 10:49:52.964434 2728 log.go:181] (0x30820e0) Reply frame received for 5\nI1005 10:49:53.056076 2728 log.go:181] (0x30820e0) Data frame received for 3\nI1005 10:49:53.056399 2728 log.go:181] (0x30820e0) Data frame received for 5\nI1005 10:49:53.056558 2728 log.go:181] (0x25ca070) (3) Data 
frame handling\nI1005 10:49:53.056820 2728 log.go:181] (0x30820e0) Data frame received for 1\nI1005 10:49:53.057116 2728 log.go:181] (0x3082150) (1) Data frame handling\nI1005 10:49:53.057486 2728 log.go:181] (0x3082310) (5) Data frame handling\n+ nc -zv -t -w 2 10.102.77.251 80\nConnection to 10.102.77.251 80 port [tcp/http] succeeded!\nI1005 10:49:53.059994 2728 log.go:181] (0x3082150) (1) Data frame sent\nI1005 10:49:53.060133 2728 log.go:181] (0x3082310) (5) Data frame sent\nI1005 10:49:53.060986 2728 log.go:181] (0x30820e0) Data frame received for 5\nI1005 10:49:53.061106 2728 log.go:181] (0x3082310) (5) Data frame handling\nI1005 10:49:53.062387 2728 log.go:181] (0x30820e0) (0x3082150) Stream removed, broadcasting: 1\nI1005 10:49:53.062705 2728 log.go:181] (0x30820e0) Go away received\nI1005 10:49:53.066028 2728 log.go:181] (0x30820e0) (0x3082150) Stream removed, broadcasting: 1\nI1005 10:49:53.066293 2728 log.go:181] (0x30820e0) (0x25ca070) Stream removed, broadcasting: 3\nI1005 10:49:53.066469 2728 log.go:181] (0x30820e0) (0x3082310) Stream removed, broadcasting: 5\n" Oct 5 10:49:53.075: INFO: stdout: "" Oct 5 10:49:53.076: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32688' Oct 5 10:49:54.581: INFO: stderr: "I1005 10:49:54.454568 2748 log.go:181] (0x2618000) (0x2618070) Create stream\nI1005 10:49:54.457186 2748 log.go:181] (0x2618000) (0x2618070) Stream added, broadcasting: 1\nI1005 10:49:54.469732 2748 log.go:181] (0x2618000) Reply frame received for 1\nI1005 10:49:54.470826 2748 log.go:181] (0x2618000) (0x30b8070) Create stream\nI1005 10:49:54.470952 2748 log.go:181] (0x2618000) (0x30b8070) Stream added, broadcasting: 3\nI1005 10:49:54.473322 2748 log.go:181] (0x2618000) Reply frame received for 3\nI1005 10:49:54.473773 2748 log.go:181] (0x2618000) (0x30b8230) Create stream\nI1005 10:49:54.473895 
2748 log.go:181] (0x2618000) (0x30b8230) Stream added, broadcasting: 5\nI1005 10:49:54.475561 2748 log.go:181] (0x2618000) Reply frame received for 5\nI1005 10:49:54.551741 2748 log.go:181] (0x2618000) Data frame received for 5\nI1005 10:49:54.552120 2748 log.go:181] (0x30b8230) (5) Data frame handling\nI1005 10:49:54.552623 2748 log.go:181] (0x2618000) Data frame received for 3\n+ nc -zv -t -w 2 172.18.0.12 32688\nConnection to 172.18.0.12 32688 port [tcp/32688] succeeded!\nI1005 10:49:54.553297 2748 log.go:181] (0x30b8230) (5) Data frame sent\nI1005 10:49:54.553521 2748 log.go:181] (0x30b8070) (3) Data frame handling\nI1005 10:49:54.554094 2748 log.go:181] (0x2618000) Data frame received for 5\nI1005 10:49:54.554193 2748 log.go:181] (0x30b8230) (5) Data frame handling\nI1005 10:49:54.555413 2748 log.go:181] (0x2618000) Data frame received for 1\nI1005 10:49:54.555603 2748 log.go:181] (0x2618070) (1) Data frame handling\nI1005 10:49:54.555780 2748 log.go:181] (0x2618070) (1) Data frame sent\nI1005 10:49:54.557746 2748 log.go:181] (0x2618000) (0x2618070) Stream removed, broadcasting: 1\nI1005 10:49:54.558498 2748 log.go:181] (0x2618000) Go away received\nI1005 10:49:54.572538 2748 log.go:181] (0x2618000) (0x2618070) Stream removed, broadcasting: 1\nI1005 10:49:54.573184 2748 log.go:181] (0x2618000) (0x30b8070) Stream removed, broadcasting: 3\nI1005 10:49:54.573438 2748 log.go:181] (0x2618000) (0x30b8230) Stream removed, broadcasting: 5\n" Oct 5 10:49:54.582: INFO: stdout: "" Oct 5 10:49:54.583: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32688' Oct 5 10:49:56.075: INFO: stderr: "I1005 10:49:55.958733 2768 log.go:181] (0x293e000) (0x293e070) Create stream\nI1005 10:49:55.961675 2768 log.go:181] (0x293e000) (0x293e070) Stream added, broadcasting: 1\nI1005 10:49:55.971822 2768 log.go:181] (0x293e000) Reply frame 
received for 1\nI1005 10:49:55.972220 2768 log.go:181] (0x293e000) (0x293e310) Create stream\nI1005 10:49:55.972280 2768 log.go:181] (0x293e000) (0x293e310) Stream added, broadcasting: 3\nI1005 10:49:55.973820 2768 log.go:181] (0x293e000) Reply frame received for 3\nI1005 10:49:55.974075 2768 log.go:181] (0x293e000) (0x28ee070) Create stream\nI1005 10:49:55.974139 2768 log.go:181] (0x293e000) (0x28ee070) Stream added, broadcasting: 5\nI1005 10:49:55.975682 2768 log.go:181] (0x293e000) Reply frame received for 5\nI1005 10:49:56.056147 2768 log.go:181] (0x293e000) Data frame received for 3\nI1005 10:49:56.056635 2768 log.go:181] (0x293e000) Data frame received for 5\nI1005 10:49:56.057056 2768 log.go:181] (0x293e000) Data frame received for 1\nI1005 10:49:56.057368 2768 log.go:181] (0x293e070) (1) Data frame handling\nI1005 10:49:56.057507 2768 log.go:181] (0x28ee070) (5) Data frame handling\nI1005 10:49:56.057787 2768 log.go:181] (0x293e310) (3) Data frame handling\nI1005 10:49:56.059582 2768 log.go:181] (0x28ee070) (5) Data frame sent\nI1005 10:49:56.059901 2768 log.go:181] (0x293e000) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.13 32688\nI1005 10:49:56.060068 2768 log.go:181] (0x28ee070) (5) Data frame handling\nI1005 10:49:56.060492 2768 log.go:181] (0x293e070) (1) Data frame sent\nI1005 10:49:56.062354 2768 log.go:181] (0x293e000) (0x293e070) Stream removed, broadcasting: 1\nI1005 10:49:56.062792 2768 log.go:181] (0x28ee070) (5) Data frame sent\nI1005 10:49:56.062891 2768 log.go:181] (0x293e000) Data frame received for 5\nI1005 10:49:56.062957 2768 log.go:181] (0x28ee070) (5) Data frame handling\nConnection to 172.18.0.13 32688 port [tcp/32688] succeeded!\nI1005 10:49:56.063745 2768 log.go:181] (0x293e000) Go away received\nI1005 10:49:56.066544 2768 log.go:181] (0x293e000) (0x293e070) Stream removed, broadcasting: 1\nI1005 10:49:56.066754 2768 log.go:181] (0x293e000) (0x293e310) Stream removed, broadcasting: 3\nI1005 10:49:56.066967 2768 log.go:181] 
(0x293e000) (0x28ee070) Stream removed, broadcasting: 5\n" Oct 5 10:49:56.077: INFO: stdout: "" Oct 5 10:49:56.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32688/ ; done' Oct 5 10:49:57.655: INFO: stderr: "I1005 10:49:57.442179 2789 log.go:181] (0x277e000) (0x277e2a0) Create stream\nI1005 10:49:57.444106 2789 log.go:181] (0x277e000) (0x277e2a0) Stream added, broadcasting: 1\nI1005 10:49:57.463306 2789 log.go:181] (0x277e000) Reply frame received for 1\nI1005 10:49:57.463871 2789 log.go:181] (0x277e000) (0x277fab0) Create stream\nI1005 10:49:57.463967 2789 log.go:181] (0x277e000) (0x277fab0) Stream added, broadcasting: 3\nI1005 10:49:57.465609 2789 log.go:181] (0x277e000) Reply frame received for 3\nI1005 10:49:57.465839 2789 log.go:181] (0x277e000) (0x2ce43f0) Create stream\nI1005 10:49:57.465900 2789 log.go:181] (0x277e000) (0x2ce43f0) Stream added, broadcasting: 5\nI1005 10:49:57.467005 2789 log.go:181] (0x277e000) Reply frame received for 5\nI1005 10:49:57.543521 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.543904 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.544067 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.544268 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.544796 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.545009 2789 log.go:181] (0x277fab0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.547382 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.547503 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.547656 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.548488 2789 log.go:181] (0x277e000) Data frame 
received for 5\nI1005 10:49:57.548636 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.548744 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.548952 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.549045 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.549162 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.552930 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.553032 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.553132 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.554102 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.554211 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.554337 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.554480 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.554597 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.554695 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.558786 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.558908 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.559022 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.559718 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.559875 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\nI1005 10:49:57.559979 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.560119 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.560259 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.560386 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.560516 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.560672 2789 
log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.560809 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.563576 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.563677 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.563767 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.564108 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.564279 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeoutI1005 10:49:57.564398 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.564519 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.564647 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.564758 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.564931 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n 2 http://172.18.0.12:32688/\nI1005 10:49:57.565051 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.565135 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.568548 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.568624 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.568717 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.569677 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.569769 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.569860 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.569938 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.570010 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.570100 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.575511 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.575598 
2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.575688 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.576132 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.576276 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.576386 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.576523 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.576749 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.576913 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.581622 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.581718 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.581865 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.582624 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.582726 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.582821 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.582911 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.583026 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.583145 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.586918 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.586992 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.587066 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.587419 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.587536 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.587667 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2I1005 10:49:57.587793 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.587919 2789 
log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.588050 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n http://172.18.0.12:32688/\nI1005 10:49:57.588179 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.588295 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.588432 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.592580 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.592694 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.592806 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.593760 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.593871 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.593965 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.594050 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.594130 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.594277 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.597897 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.598011 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.598122 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.598456 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.598548 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/I1005 10:49:57.598632 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.598754 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.598887 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.599061 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.599167 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.599278 2789 log.go:181] (0x2ce43f0) (5) 
Data frame handling\nI1005 10:49:57.599400 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n\nI1005 10:49:57.602797 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.602907 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.603061 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.603274 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.603354 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.603428 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.603495 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.603558 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.603661 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.609331 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.609508 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.609691 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.610223 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.610369 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.610513 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.610695 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.610781 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.610883 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.614403 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.614498 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.614588 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.614921 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.614994 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.615102 2789 
log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.615240 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.615380 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.615512 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.620246 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.620371 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.620502 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.620921 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.621020 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.621098 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.621189 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.621257 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\nI1005 10:49:57.621347 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.625219 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.625352 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.625540 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.626191 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.626352 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.626482 2789 log.go:181] (0x2ce43f0) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:57.626600 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.626713 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.626865 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.632484 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.632606 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 
10:49:57.632753 2789 log.go:181] (0x277fab0) (3) Data frame sent\nI1005 10:49:57.633371 2789 log.go:181] (0x277e000) Data frame received for 3\nI1005 10:49:57.633498 2789 log.go:181] (0x277fab0) (3) Data frame handling\nI1005 10:49:57.634057 2789 log.go:181] (0x277e000) Data frame received for 5\nI1005 10:49:57.634216 2789 log.go:181] (0x2ce43f0) (5) Data frame handling\nI1005 10:49:57.636124 2789 log.go:181] (0x277e000) Data frame received for 1\nI1005 10:49:57.636216 2789 log.go:181] (0x277e2a0) (1) Data frame handling\nI1005 10:49:57.636316 2789 log.go:181] (0x277e2a0) (1) Data frame sent\nI1005 10:49:57.637128 2789 log.go:181] (0x277e000) (0x277e2a0) Stream removed, broadcasting: 1\nI1005 10:49:57.640009 2789 log.go:181] (0x277e000) Go away received\nI1005 10:49:57.643162 2789 log.go:181] (0x277e000) (0x277e2a0) Stream removed, broadcasting: 1\nI1005 10:49:57.643449 2789 log.go:181] (0x277e000) (0x277fab0) Stream removed, broadcasting: 3\nI1005 10:49:57.643684 2789 log.go:181] (0x277e000) (0x2ce43f0) Stream removed, broadcasting: 5\n" Oct 5 10:49:57.661: INFO: stdout: "\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk\naffinity-nodeport-timeout-xvtjk" Oct 5 10:49:57.661: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.661: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.661: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.661: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.661: INFO: 
Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.661: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Received response from host: affinity-nodeport-timeout-xvtjk Oct 5 10:49:57.662: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:32688/' Oct 5 10:49:59.203: INFO: stderr: "I1005 10:49:59.045972 2809 log.go:181] (0x2fb4310) (0x2fb4380) Create stream\nI1005 10:49:59.048019 2809 log.go:181] (0x2fb4310) (0x2fb4380) Stream added, broadcasting: 1\nI1005 10:49:59.098899 2809 log.go:181] (0x2fb4310) Reply frame received for 1\nI1005 10:49:59.099511 2809 log.go:181] (0x2fb4310) (0x2dfc000) Create stream\nI1005 10:49:59.099608 2809 log.go:181] (0x2fb4310) (0x2dfc000) Stream added, broadcasting: 3\nI1005 10:49:59.101300 2809 log.go:181] (0x2fb4310) Reply frame received for 3\nI1005 10:49:59.101599 2809 log.go:181] (0x2fb4310) (0x2fb4070) Create stream\nI1005 10:49:59.101677 2809 log.go:181] (0x2fb4310) (0x2fb4070) Stream added, broadcasting: 5\nI1005 
10:49:59.102893 2809 log.go:181] (0x2fb4310) Reply frame received for 5\nI1005 10:49:59.179463 2809 log.go:181] (0x2fb4310) Data frame received for 5\nI1005 10:49:59.179862 2809 log.go:181] (0x2fb4070) (5) Data frame handling\nI1005 10:49:59.180723 2809 log.go:181] (0x2fb4070) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:49:59.184367 2809 log.go:181] (0x2fb4310) Data frame received for 3\nI1005 10:49:59.184441 2809 log.go:181] (0x2dfc000) (3) Data frame handling\nI1005 10:49:59.184572 2809 log.go:181] (0x2dfc000) (3) Data frame sent\nI1005 10:49:59.185290 2809 log.go:181] (0x2fb4310) Data frame received for 3\nI1005 10:49:59.185442 2809 log.go:181] (0x2dfc000) (3) Data frame handling\nI1005 10:49:59.186135 2809 log.go:181] (0x2fb4310) Data frame received for 5\nI1005 10:49:59.186304 2809 log.go:181] (0x2fb4070) (5) Data frame handling\nI1005 10:49:59.187677 2809 log.go:181] (0x2fb4310) Data frame received for 1\nI1005 10:49:59.187802 2809 log.go:181] (0x2fb4380) (1) Data frame handling\nI1005 10:49:59.187923 2809 log.go:181] (0x2fb4380) (1) Data frame sent\nI1005 10:49:59.188788 2809 log.go:181] (0x2fb4310) (0x2fb4380) Stream removed, broadcasting: 1\nI1005 10:49:59.191021 2809 log.go:181] (0x2fb4310) Go away received\nI1005 10:49:59.193902 2809 log.go:181] (0x2fb4310) (0x2fb4380) Stream removed, broadcasting: 1\nI1005 10:49:59.194087 2809 log.go:181] (0x2fb4310) (0x2dfc000) Stream removed, broadcasting: 3\nI1005 10:49:59.194223 2809 log.go:181] (0x2fb4310) (0x2fb4070) Stream removed, broadcasting: 5\n" Oct 5 10:49:59.204: INFO: stdout: "affinity-nodeport-timeout-xvtjk" Oct 5 10:50:14.205: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4044 execpod-affinityzhjsn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.12:32688/' Oct 5 10:50:15.708: INFO: stderr: "I1005 10:50:15.589622 2829 log.go:181] (0x27b4770) (0x27b4d90) 
Create stream\nI1005 10:50:15.591957 2829 log.go:181] (0x27b4770) (0x27b4d90) Stream added, broadcasting: 1\nI1005 10:50:15.610785 2829 log.go:181] (0x27b4770) Reply frame received for 1\nI1005 10:50:15.611225 2829 log.go:181] (0x27b4770) (0x29de070) Create stream\nI1005 10:50:15.611288 2829 log.go:181] (0x27b4770) (0x29de070) Stream added, broadcasting: 3\nI1005 10:50:15.612357 2829 log.go:181] (0x27b4770) Reply frame received for 3\nI1005 10:50:15.612587 2829 log.go:181] (0x27b4770) (0x2baa230) Create stream\nI1005 10:50:15.612647 2829 log.go:181] (0x27b4770) (0x2baa230) Stream added, broadcasting: 5\nI1005 10:50:15.613915 2829 log.go:181] (0x27b4770) Reply frame received for 5\nI1005 10:50:15.692062 2829 log.go:181] (0x27b4770) Data frame received for 5\nI1005 10:50:15.692271 2829 log.go:181] (0x2baa230) (5) Data frame handling\nI1005 10:50:15.692727 2829 log.go:181] (0x2baa230) (5) Data frame sent\nI1005 10:50:15.693043 2829 log.go:181] (0x27b4770) Data frame received for 3\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32688/\nI1005 10:50:15.693172 2829 log.go:181] (0x29de070) (3) Data frame handling\nI1005 10:50:15.693249 2829 log.go:181] (0x29de070) (3) Data frame sent\nI1005 10:50:15.693397 2829 log.go:181] (0x27b4770) Data frame received for 3\nI1005 10:50:15.693453 2829 log.go:181] (0x29de070) (3) Data frame handling\nI1005 10:50:15.693818 2829 log.go:181] (0x27b4770) Data frame received for 5\nI1005 10:50:15.693915 2829 log.go:181] (0x2baa230) (5) Data frame handling\nI1005 10:50:15.694972 2829 log.go:181] (0x27b4770) Data frame received for 1\nI1005 10:50:15.695059 2829 log.go:181] (0x27b4d90) (1) Data frame handling\nI1005 10:50:15.695133 2829 log.go:181] (0x27b4d90) (1) Data frame sent\nI1005 10:50:15.695532 2829 log.go:181] (0x27b4770) (0x27b4d90) Stream removed, broadcasting: 1\nI1005 10:50:15.697490 2829 log.go:181] (0x27b4770) Go away received\nI1005 10:50:15.699808 2829 log.go:181] (0x27b4770) (0x27b4d90) Stream removed, broadcasting: 
1\nI1005 10:50:15.700272 2829 log.go:181] (0x27b4770) (0x29de070) Stream removed, broadcasting: 3\nI1005 10:50:15.700494 2829 log.go:181] (0x27b4770) (0x2baa230) Stream removed, broadcasting: 5\n" Oct 5 10:50:15.709: INFO: stdout: "affinity-nodeport-timeout-6f7b9" Oct 5 10:50:15.709: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-4044, will wait for the garbage collector to delete the pods Oct 5 10:50:16.081: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 271.74393ms Oct 5 10:50:16.682: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 600.945349ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:50:28.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4044" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:65.722 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":153,"skipped":2428,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:50:28.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:50:32.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7973" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":303,"completed":154,"skipped":2434,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:50:32.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-6d7d16fb-f26d-4e93-9c5c-839bf8e440a8 STEP: Creating a pod to test consume secrets Oct 5 10:50:33.093: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3" in namespace "projected-6232" to be "Succeeded or Failed" Oct 5 10:50:33.119: INFO: Pod "pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3": Phase="Pending", Reason="", readiness=false. Elapsed: 26.580382ms Oct 5 10:50:35.156: INFO: Pod "pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06333218s Oct 5 10:50:37.164: INFO: Pod "pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.071428577s STEP: Saw pod success Oct 5 10:50:37.165: INFO: Pod "pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3" satisfied condition "Succeeded or Failed" Oct 5 10:50:37.170: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3 container projected-secret-volume-test: STEP: delete the pod Oct 5 10:50:37.208: INFO: Waiting for pod pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3 to disappear Oct 5 10:50:37.256: INFO: Pod pod-projected-secrets-371eefd7-eea6-4a5a-83de-f69003a035a3 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:50:37.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6232" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":155,"skipped":2458,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:50:37.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 10:50:37.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f" in namespace "downward-api-1271" to be "Succeeded or Failed" Oct 5 10:50:37.410: INFO: Pod "downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.71778ms Oct 5 10:50:39.436: INFO: Pod "downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057236007s Oct 5 10:50:41.443: INFO: Pod "downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064072307s STEP: Saw pod success Oct 5 10:50:41.443: INFO: Pod "downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f" satisfied condition "Succeeded or Failed" Oct 5 10:50:41.448: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f container client-container: STEP: delete the pod Oct 5 10:50:41.483: INFO: Waiting for pod downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f to disappear Oct 5 10:50:41.492: INFO: Pod downwardapi-volume-24f2d001-a920-4c99-8622-489524e6c03f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:50:41.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1271" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":156,"skipped":2472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:50:41.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-configmap-gl2c STEP: Creating a pod to test atomic-volume-subpath Oct 5 10:50:41.677: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gl2c" in namespace "subpath-2606" to be "Succeeded or Failed" Oct 5 10:50:41.682: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.250874ms Oct 5 10:50:43.688: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010821891s Oct 5 10:50:45.694: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 4.016914488s Oct 5 10:50:47.703: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 6.025913487s Oct 5 10:50:49.711: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 8.034197737s Oct 5 10:50:51.718: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 10.041435881s Oct 5 10:50:53.725: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 12.04842857s Oct 5 10:50:55.754: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 14.076846543s Oct 5 10:50:57.761: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 16.083802206s Oct 5 10:50:59.783: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 18.105987354s Oct 5 10:51:01.790: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 20.113325834s Oct 5 10:51:03.802: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Running", Reason="", readiness=true. Elapsed: 22.124804971s Oct 5 10:51:05.810: INFO: Pod "pod-subpath-test-configmap-gl2c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.133636698s STEP: Saw pod success Oct 5 10:51:05.811: INFO: Pod "pod-subpath-test-configmap-gl2c" satisfied condition "Succeeded or Failed" Oct 5 10:51:05.820: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-gl2c container test-container-subpath-configmap-gl2c: STEP: delete the pod Oct 5 10:51:06.053: INFO: Waiting for pod pod-subpath-test-configmap-gl2c to disappear Oct 5 10:51:06.190: INFO: Pod pod-subpath-test-configmap-gl2c no longer exists STEP: Deleting pod pod-subpath-test-configmap-gl2c Oct 5 10:51:06.190: INFO: Deleting pod "pod-subpath-test-configmap-gl2c" in namespace "subpath-2606" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:51:06.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2606" for this suite. • [SLOW TEST:24.694 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":303,"completed":157,"skipped":2550,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:51:06.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0666 on node default medium Oct 5 10:51:06.282: INFO: Waiting up to 5m0s for pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617" in namespace "emptydir-974" to be "Succeeded or Failed" Oct 5 10:51:06.326: INFO: Pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617": Phase="Pending", Reason="", readiness=false. Elapsed: 44.309452ms Oct 5 10:51:08.432: INFO: Pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150039672s Oct 5 10:51:10.438: INFO: Pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617": Phase="Running", Reason="", readiness=true. Elapsed: 4.155820966s Oct 5 10:51:12.444: INFO: Pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.162090789s STEP: Saw pod success Oct 5 10:51:12.444: INFO: Pod "pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617" satisfied condition "Succeeded or Failed" Oct 5 10:51:12.448: INFO: Trying to get logs from node kali-worker pod pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617 container test-container: STEP: delete the pod Oct 5 10:51:12.488: INFO: Waiting for pod pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617 to disappear Oct 5 10:51:12.497: INFO: Pod pod-68a74bd6-d11b-400c-bd0b-9a63eb53a617 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:51:12.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-974" for this suite. • [SLOW TEST:6.320 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":158,"skipped":2557,"failed":0} [k8s.io] Pods should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:51:12.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should delete a collection of pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of pods Oct 5 10:51:12.595: INFO: created test-pod-1 Oct 5 10:51:12.613: INFO: created test-pod-2 Oct 5 10:51:12.659: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:51:12.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8961" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":303,"completed":159,"skipped":2557,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:51:12.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Oct 5 10:51:12.997: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the sample API server. 
Oct 5 10:51:20.385: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Oct 5 10:51:22.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:51:24.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737491880, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:51:27.726: INFO: Waited 730.454306ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:51:28.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3858" for this suite. • [SLOW TEST:15.536 seconds] [sig-api-machinery] Aggregator /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":303,"completed":160,"skipped":2564,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes 
client Oct 5 10:51:28.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:51:29.116: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-84de8643-9c9b-4385-a384-c8b9a21bfa27" in namespace "security-context-test-4382" to be "Succeeded or Failed" Oct 5 10:51:29.133: INFO: Pod "alpine-nnp-false-84de8643-9c9b-4385-a384-c8b9a21bfa27": Phase="Pending", Reason="", readiness=false. Elapsed: 16.873705ms Oct 5 10:51:31.151: INFO: Pod "alpine-nnp-false-84de8643-9c9b-4385-a384-c8b9a21bfa27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034627499s Oct 5 10:51:33.158: INFO: Pod "alpine-nnp-false-84de8643-9c9b-4385-a384-c8b9a21bfa27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041449127s Oct 5 10:51:33.158: INFO: Pod "alpine-nnp-false-84de8643-9c9b-4385-a384-c8b9a21bfa27" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:51:33.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4382" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":161,"skipped":2575,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:51:33.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-2699 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 5 10:51:33.594: INFO: Found 0 stateful pods, waiting for 3 Oct 5 10:51:43.712: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 10:51:43.712: INFO: Waiting for pod ss2-1 to enter Running 
- Ready=true, currently Running - Ready=true Oct 5 10:51:43.712: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 5 10:51:53.606: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 10:51:53.607: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 10:51:53.607: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Oct 5 10:51:53.632: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2699 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:51:58.352: INFO: stderr: "I1005 10:51:58.180313 2849 log.go:181] (0x247a8c0) (0x247aa80) Create stream\nI1005 10:51:58.185211 2849 log.go:181] (0x247a8c0) (0x247aa80) Stream added, broadcasting: 1\nI1005 10:51:58.196939 2849 log.go:181] (0x247a8c0) Reply frame received for 1\nI1005 10:51:58.198074 2849 log.go:181] (0x247a8c0) (0x2882230) Create stream\nI1005 10:51:58.198270 2849 log.go:181] (0x247a8c0) (0x2882230) Stream added, broadcasting: 3\nI1005 10:51:58.200641 2849 log.go:181] (0x247a8c0) Reply frame received for 3\nI1005 10:51:58.201300 2849 log.go:181] (0x247a8c0) (0x2882460) Create stream\nI1005 10:51:58.201458 2849 log.go:181] (0x247a8c0) (0x2882460) Stream added, broadcasting: 5\nI1005 10:51:58.203420 2849 log.go:181] (0x247a8c0) Reply frame received for 5\nI1005 10:51:58.302333 2849 log.go:181] (0x247a8c0) Data frame received for 5\nI1005 10:51:58.302518 2849 log.go:181] (0x2882460) (5) Data frame handling\nI1005 10:51:58.302863 2849 log.go:181] (0x2882460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:51:58.332400 2849 log.go:181] (0x247a8c0) Data frame received for 3\nI1005 10:51:58.332625 2849 log.go:181] (0x2882230) (3) Data frame handling\nI1005 10:51:58.332775 2849 log.go:181] (0x247a8c0) 
Data frame received for 5\nI1005 10:51:58.333077 2849 log.go:181] (0x2882460) (5) Data frame handling\nI1005 10:51:58.333316 2849 log.go:181] (0x2882230) (3) Data frame sent\nI1005 10:51:58.333517 2849 log.go:181] (0x247a8c0) Data frame received for 3\nI1005 10:51:58.333678 2849 log.go:181] (0x2882230) (3) Data frame handling\nI1005 10:51:58.334347 2849 log.go:181] (0x247a8c0) Data frame received for 1\nI1005 10:51:58.334513 2849 log.go:181] (0x247aa80) (1) Data frame handling\nI1005 10:51:58.334739 2849 log.go:181] (0x247aa80) (1) Data frame sent\nI1005 10:51:58.335263 2849 log.go:181] (0x247a8c0) (0x247aa80) Stream removed, broadcasting: 1\nI1005 10:51:58.337989 2849 log.go:181] (0x247a8c0) Go away received\nI1005 10:51:58.341254 2849 log.go:181] (0x247a8c0) (0x247aa80) Stream removed, broadcasting: 1\nI1005 10:51:58.341434 2849 log.go:181] (0x247a8c0) (0x2882230) Stream removed, broadcasting: 3\nI1005 10:51:58.341591 2849 log.go:181] (0x247a8c0) (0x2882460) Stream removed, broadcasting: 5\n" Oct 5 10:51:58.352: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:51:58.353: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 5 10:52:08.406: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Oct 5 10:52:18.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2699 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:52:19.987: INFO: stderr: "I1005 10:52:19.880401 2870 log.go:181] (0x2d2c000) (0x2d2c070) Create stream\nI1005 10:52:19.882763 2870 log.go:181] (0x2d2c000) (0x2d2c070) Stream added, broadcasting: 1\nI1005 
10:52:19.902280 2870 log.go:181] (0x2d2c000) Reply frame received for 1\nI1005 10:52:19.902785 2870 log.go:181] (0x2d2c000) (0x2c7c460) Create stream\nI1005 10:52:19.902855 2870 log.go:181] (0x2d2c000) (0x2c7c460) Stream added, broadcasting: 3\nI1005 10:52:19.904216 2870 log.go:181] (0x2d2c000) Reply frame received for 3\nI1005 10:52:19.904582 2870 log.go:181] (0x2d2c000) (0x27c4070) Create stream\nI1005 10:52:19.904697 2870 log.go:181] (0x2d2c000) (0x27c4070) Stream added, broadcasting: 5\nI1005 10:52:19.906034 2870 log.go:181] (0x2d2c000) Reply frame received for 5\nI1005 10:52:19.967517 2870 log.go:181] (0x2d2c000) Data frame received for 3\nI1005 10:52:19.967771 2870 log.go:181] (0x2d2c000) Data frame received for 1\nI1005 10:52:19.967957 2870 log.go:181] (0x2d2c000) Data frame received for 5\nI1005 10:52:19.968192 2870 log.go:181] (0x27c4070) (5) Data frame handling\nI1005 10:52:19.968405 2870 log.go:181] (0x2c7c460) (3) Data frame handling\nI1005 10:52:19.968575 2870 log.go:181] (0x2d2c070) (1) Data frame handling\nI1005 10:52:19.969157 2870 log.go:181] (0x2c7c460) (3) Data frame sent\nI1005 10:52:19.969329 2870 log.go:181] (0x2d2c070) (1) Data frame sent\nI1005 10:52:19.969501 2870 log.go:181] (0x27c4070) (5) Data frame sent\nI1005 10:52:19.969734 2870 log.go:181] (0x2d2c000) Data frame received for 3\nI1005 10:52:19.969793 2870 log.go:181] (0x2c7c460) (3) Data frame handling\nI1005 10:52:19.970200 2870 log.go:181] (0x2d2c000) Data frame received for 5\nI1005 10:52:19.970307 2870 log.go:181] (0x27c4070) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 10:52:19.971597 2870 log.go:181] (0x2d2c000) (0x2d2c070) Stream removed, broadcasting: 1\nI1005 10:52:19.975028 2870 log.go:181] (0x2d2c000) Go away received\nI1005 10:52:19.977765 2870 log.go:181] (0x2d2c000) (0x2d2c070) Stream removed, broadcasting: 1\nI1005 10:52:19.978046 2870 log.go:181] (0x2d2c000) (0x2c7c460) Stream removed, broadcasting: 3\nI1005 10:52:19.978268 2870 
log.go:181] (0x2d2c000) (0x27c4070) Stream removed, broadcasting: 5\n" Oct 5 10:52:19.988: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 10:52:19.988: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Oct 5 10:52:40.029: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2699 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Oct 5 10:52:41.611: INFO: stderr: "I1005 10:52:41.427302 2890 log.go:181] (0x2aa24d0) (0x2aa2540) Create stream\nI1005 10:52:41.429081 2890 log.go:181] (0x2aa24d0) (0x2aa2540) Stream added, broadcasting: 1\nI1005 10:52:41.437083 2890 log.go:181] (0x2aa24d0) Reply frame received for 1\nI1005 10:52:41.437591 2890 log.go:181] (0x2aa24d0) (0x267f730) Create stream\nI1005 10:52:41.437662 2890 log.go:181] (0x2aa24d0) (0x267f730) Stream added, broadcasting: 3\nI1005 10:52:41.439035 2890 log.go:181] (0x2aa24d0) Reply frame received for 3\nI1005 10:52:41.439245 2890 log.go:181] (0x2aa24d0) (0x2aa2700) Create stream\nI1005 10:52:41.439301 2890 log.go:181] (0x2aa24d0) (0x2aa2700) Stream added, broadcasting: 5\nI1005 10:52:41.440483 2890 log.go:181] (0x2aa24d0) Reply frame received for 5\nI1005 10:52:41.530935 2890 log.go:181] (0x2aa24d0) Data frame received for 5\nI1005 10:52:41.531377 2890 log.go:181] (0x2aa2700) (5) Data frame handling\nI1005 10:52:41.532363 2890 log.go:181] (0x2aa2700) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI1005 10:52:41.592753 2890 log.go:181] (0x2aa24d0) Data frame received for 3\nI1005 10:52:41.593116 2890 log.go:181] (0x267f730) (3) Data frame handling\nI1005 10:52:41.593263 2890 log.go:181] (0x2aa24d0) Data frame received for 5\nI1005 10:52:41.593403 2890 log.go:181] (0x2aa2700) (5) Data frame handling\nI1005 
10:52:41.593510 2890 log.go:181] (0x267f730) (3) Data frame sent\nI1005 10:52:41.593629 2890 log.go:181] (0x2aa24d0) Data frame received for 3\nI1005 10:52:41.593708 2890 log.go:181] (0x267f730) (3) Data frame handling\nI1005 10:52:41.594520 2890 log.go:181] (0x2aa24d0) Data frame received for 1\nI1005 10:52:41.594598 2890 log.go:181] (0x2aa2540) (1) Data frame handling\nI1005 10:52:41.594698 2890 log.go:181] (0x2aa2540) (1) Data frame sent\nI1005 10:52:41.595945 2890 log.go:181] (0x2aa24d0) (0x2aa2540) Stream removed, broadcasting: 1\nI1005 10:52:41.597961 2890 log.go:181] (0x2aa24d0) Go away received\nI1005 10:52:41.601570 2890 log.go:181] (0x2aa24d0) (0x2aa2540) Stream removed, broadcasting: 1\nI1005 10:52:41.601754 2890 log.go:181] (0x2aa24d0) (0x267f730) Stream removed, broadcasting: 3\nI1005 10:52:41.601900 2890 log.go:181] (0x2aa24d0) (0x2aa2700) Stream removed, broadcasting: 5\n" Oct 5 10:52:41.613: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Oct 5 10:52:41.613: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Oct 5 10:52:51.681: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Oct 5 10:53:01.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=statefulset-2699 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Oct 5 10:53:03.259: INFO: stderr: "I1005 10:53:03.129144 2910 log.go:181] (0x2d2cd90) (0x2d2ce00) Create stream\nI1005 10:53:03.133383 2910 log.go:181] (0x2d2cd90) (0x2d2ce00) Stream added, broadcasting: 1\nI1005 10:53:03.151109 2910 log.go:181] (0x2d2cd90) Reply frame received for 1\nI1005 10:53:03.152427 2910 log.go:181] (0x2d2cd90) (0x25100e0) Create stream\nI1005 10:53:03.152615 2910 log.go:181] (0x2d2cd90) (0x25100e0) Stream added, broadcasting: 3\nI1005 10:53:03.157422 2910 
log.go:181] (0x2d2cd90) Reply frame received for 3\nI1005 10:53:03.157756 2910 log.go:181] (0x2d2cd90) (0x25ee070) Create stream\nI1005 10:53:03.157832 2910 log.go:181] (0x2d2cd90) (0x25ee070) Stream added, broadcasting: 5\nI1005 10:53:03.158907 2910 log.go:181] (0x2d2cd90) Reply frame received for 5\nI1005 10:53:03.238583 2910 log.go:181] (0x2d2cd90) Data frame received for 3\nI1005 10:53:03.238997 2910 log.go:181] (0x2d2cd90) Data frame received for 5\nI1005 10:53:03.239171 2910 log.go:181] (0x25ee070) (5) Data frame handling\nI1005 10:53:03.239283 2910 log.go:181] (0x2d2cd90) Data frame received for 1\nI1005 10:53:03.239459 2910 log.go:181] (0x2d2ce00) (1) Data frame handling\nI1005 10:53:03.239749 2910 log.go:181] (0x25100e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI1005 10:53:03.241440 2910 log.go:181] (0x25100e0) (3) Data frame sent\nI1005 10:53:03.241554 2910 log.go:181] (0x25ee070) (5) Data frame sent\nI1005 10:53:03.241760 2910 log.go:181] (0x2d2ce00) (1) Data frame sent\nI1005 10:53:03.241998 2910 log.go:181] (0x2d2cd90) Data frame received for 3\nI1005 10:53:03.242134 2910 log.go:181] (0x25100e0) (3) Data frame handling\nI1005 10:53:03.243037 2910 log.go:181] (0x2d2cd90) Data frame received for 5\nI1005 10:53:03.243138 2910 log.go:181] (0x2d2cd90) (0x2d2ce00) Stream removed, broadcasting: 1\nI1005 10:53:03.244228 2910 log.go:181] (0x25ee070) (5) Data frame handling\nI1005 10:53:03.245443 2910 log.go:181] (0x2d2cd90) Go away received\nI1005 10:53:03.248281 2910 log.go:181] (0x2d2cd90) (0x2d2ce00) Stream removed, broadcasting: 1\nI1005 10:53:03.248959 2910 log.go:181] (0x2d2cd90) (0x25100e0) Stream removed, broadcasting: 3\nI1005 10:53:03.249272 2910 log.go:181] (0x2d2cd90) (0x25ee070) Stream removed, broadcasting: 5\n" Oct 5 10:53:03.260: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Oct 5 10:53:03.260: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: 
'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Oct 5 10:53:13.301: INFO: Waiting for StatefulSet statefulset-2699/ss2 to complete update
Oct 5 10:53:13.302: INFO: Waiting for Pod statefulset-2699/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Oct 5 10:53:13.302: INFO: Waiting for Pod statefulset-2699/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Oct 5 10:53:23.420: INFO: Waiting for StatefulSet statefulset-2699/ss2 to complete update
Oct 5 10:53:23.420: INFO: Waiting for Pod statefulset-2699/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Oct 5 10:53:33.318: INFO: Deleting all statefulset in ns statefulset-2699
Oct 5 10:53:33.323: INFO: Scaling statefulset ss2 to 0
Oct 5 10:54:03.395: INFO: Waiting for statefulset status.replicas updated to 0
Oct 5 10:54:03.400: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:54:03.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2699" for this suite.
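For reference, the rolling update and rollback exercised above can be reproduced with a StatefulSet spec along these lines (a minimal sketch: the name `ss2` and image tag are taken from the log; the label, container name, and Service name are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: ss2            # headless Service name (illustrative)
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate       # pods are replaced one at a time, in reverse ordinal order
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
```

Editing `spec.template` (e.g. bumping the image to `httpd:2.4.39-alpine`) records a new ControllerRevision and rolls the pods in reverse ordinal order; re-applying the previous template rolls back the same way, which is the revision churn between `ss2-65c7964b94` and `ss2-84f9d6bf57` visible in the log.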
• [SLOW TEST:150.188 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":303,"completed":162,"skipped":2588,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:54:03.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0777 on node default medium
Oct 5 10:54:03.561: INFO: Waiting up to 5m0s for pod "pod-b31335a5-b1d1-4cf5-a690-efa5add434ca" in namespace "emptydir-5744" to be "Succeeded or Failed"
Oct 5 10:54:03.655: INFO: Pod "pod-b31335a5-b1d1-4cf5-a690-efa5add434ca": Phase="Pending", Reason="", readiness=false. Elapsed: 93.208495ms
Oct 5 10:54:05.661: INFO: Pod "pod-b31335a5-b1d1-4cf5-a690-efa5add434ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099680454s
Oct 5 10:54:07.670: INFO: Pod "pod-b31335a5-b1d1-4cf5-a690-efa5add434ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.107908596s
STEP: Saw pod success
Oct 5 10:54:07.670: INFO: Pod "pod-b31335a5-b1d1-4cf5-a690-efa5add434ca" satisfied condition "Succeeded or Failed"
Oct 5 10:54:07.675: INFO: Trying to get logs from node kali-worker2 pod pod-b31335a5-b1d1-4cf5-a690-efa5add434ca container test-container:
STEP: delete the pod
Oct 5 10:54:07.713: INFO: Waiting for pod pod-b31335a5-b1d1-4cf5-a690-efa5add434ca to disappear
Oct 5 10:54:07.718: INFO: Pod pod-b31335a5-b1d1-4cf5-a690-efa5add434ca no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:54:07.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5744" for this suite.
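The pod this test creates is functionally equivalent to the following sketch: a short-lived pod that writes a mode-0777 file into an `emptyDir` volume on the default medium and exits, so its phase can reach Succeeded. The name and image are illustrative stand-ins (the conformance test uses the e2e mounttest image and a generated name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777     # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never        # lets the pod reach the Succeeded phase
  containers:
  - name: test-container
    image: busybox            # illustrative stand-in for the e2e mounttest image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}              # default medium, i.e. node-local disk
```

The test then reads the container log (as in the "Trying to get logs" entry above) to verify the file mode.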
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":163,"skipped":2597,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:54:07.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: running the image docker.io/library/httpd:2.4.38-alpine Oct 5 10:54:07.793: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-6209' Oct 5 10:54:09.105: INFO: stderr: "" Oct 5 10:54:09.105: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Oct 5 10:54:09.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod -o json --namespace=kubectl-6209' Oct 5 10:54:10.314: 
INFO: stderr: "" Oct 5 10:54:10.315: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-10-05T10:54:09Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T10:54:08Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:message\": {},\n \"f:reason\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-10-05T10:54:09Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6209\",\n \"resourceVersion\": \"3170124\",\n \"selfLink\": 
\"/api/v1/namespaces/kubectl-6209/pods/e2e-test-httpd-pod\",\n \"uid\": \"1edc96da-c913-4dc4-b813-8458b4feed43\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-hjm27\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-hjm27\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-hjm27\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:54:09Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:54:09Z\",\n \"message\": \"containers with unready status: [e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:54:09Z\",\n \"message\": \"containers with unready status: 
[e2e-test-httpd-pod]\",\n \"reason\": \"ContainersNotReady\",\n \"status\": \"False\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-10-05T10:54:09Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": false,\n \"restartCount\": 0,\n \"started\": false,\n \"state\": {\n \"waiting\": {\n \"reason\": \"ContainerCreating\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.13\",\n \"phase\": \"Pending\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-10-05T10:54:09Z\"\n }\n}\n" Oct 5 10:54:10.318: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config replace -f - --dry-run server --namespace=kubectl-6209' Oct 5 10:54:12.725: INFO: stderr: "W1005 10:54:11.162748 2970 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.\n" Oct 5 10:54:12.725: INFO: stdout: "pod/e2e-test-httpd-pod replaced (dry run)\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Oct 5 10:54:12.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6209' Oct 5 10:54:28.117: INFO: stderr: "" Oct 5 10:54:28.118: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:54:28.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6209" for this suite. 
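The deprecation warning in the stderr above is triggered by the old space-separated `--dry-run server` form. A sketch of the equivalent non-deprecated invocation (assuming kubectl >= 1.18; the `sed` image swap stands in for the test's in-memory JSON edit):

```console
$ kubectl get pod e2e-test-httpd-pod -o json \
    | sed 's|httpd:2.4.38-alpine|httpd:2.4.39-alpine|' \
    | kubectl replace -f - --dry-run=server
```

With `--dry-run=server` the request passes admission and validation on the API server but is never persisted, which is why the subsequent verification step still sees the original image `docker.io/library/httpd:2.4.38-alpine`.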
• [SLOW TEST:20.388 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl server-side dry-run
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:919
    should check if kubectl can dry-run update Pods [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":303,"completed":164,"skipped":2609,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:54:28.138: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0644 on node default medium
Oct 5 10:54:28.272: INFO: Waiting up to 5m0s for pod "pod-220465cc-a9ab-4171-881d-fc86f6290844" in namespace "emptydir-269" to be "Succeeded or Failed"
Oct 5 10:54:28.287: INFO: Pod "pod-220465cc-a9ab-4171-881d-fc86f6290844": Phase="Pending", Reason="", readiness=false. Elapsed: 14.221567ms
Oct 5 10:54:30.295: INFO: Pod "pod-220465cc-a9ab-4171-881d-fc86f6290844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02244053s
Oct 5 10:54:32.301: INFO: Pod "pod-220465cc-a9ab-4171-881d-fc86f6290844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028812003s
STEP: Saw pod success
Oct 5 10:54:32.302: INFO: Pod "pod-220465cc-a9ab-4171-881d-fc86f6290844" satisfied condition "Succeeded or Failed"
Oct 5 10:54:32.306: INFO: Trying to get logs from node kali-worker2 pod pod-220465cc-a9ab-4171-881d-fc86f6290844 container test-container:
STEP: delete the pod
Oct 5 10:54:32.325: INFO: Waiting for pod pod-220465cc-a9ab-4171-881d-fc86f6290844 to disappear
Oct 5 10:54:32.335: INFO: Pod pod-220465cc-a9ab-4171-881d-fc86f6290844 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:54:32.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-269" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":165,"skipped":2630,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:54:32.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating projection with secret that has name projected-secret-test-map-1f4d1e8e-ab65-40fc-9881-1f370fa2c0f6
STEP: Creating a pod to test consume secrets
Oct 5 10:54:32.626: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b" in namespace "projected-1644" to be "Succeeded or Failed"
Oct 5 10:54:32.636: INFO: Pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.85867ms
Oct 5 10:54:34.692: INFO: Pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065528281s
Oct 5 10:54:36.704: INFO: Pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b": Phase="Running", Reason="", readiness=true. Elapsed: 4.077902373s
Oct 5 10:54:38.712: INFO: Pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085251373s
STEP: Saw pod success
Oct 5 10:54:38.712: INFO: Pod "pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b" satisfied condition "Succeeded or Failed"
Oct 5 10:54:38.717: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b container projected-secret-volume-test:
STEP: delete the pod
Oct 5 10:54:38.751: INFO: Waiting for pod pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b to disappear
Oct 5 10:54:38.763: INFO: Pod pod-projected-secrets-78b59838-ecec-448e-b7ab-037b2a8e514b no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:54:38.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1644" for this suite.
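What "with mappings" means in the projected-secret test above: the projected volume remaps a secret key to a custom file path instead of exposing it under its own name. A minimal sketch (the secret name, key, and path are illustrative; the test uses generated names):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret      # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                # illustrative stand-in for the e2e mounttest image
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    projected:
      sources:
      - secret:
          name: my-secret         # illustrative; must contain a key "data-1"
          items:
          - key: data-1
            path: new-path-data-1 # the mapping: key "data-1" appears under this file name
```

Without the `items` mapping, the key would be mounted at its default path `data-1` under the mount point.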
• [SLOW TEST:6.347 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":166,"skipped":2643,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:54:38.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 10:54:38.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581" in namespace "projected-6560" to be "Succeeded or Failed"
Oct 5 10:54:38.904: INFO: Pod "downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581": Phase="Pending", Reason="", readiness=false. Elapsed: 19.027401ms
Oct 5 10:54:40.939: INFO: Pod "downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053983942s
Oct 5 10:54:42.947: INFO: Pod "downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.062454593s
STEP: Saw pod success
Oct 5 10:54:42.948: INFO: Pod "downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581" satisfied condition "Succeeded or Failed"
Oct 5 10:54:42.953: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581 container client-container:
STEP: delete the pod
Oct 5 10:54:42.977: INFO: Waiting for pod downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581 to disappear
Oct 5 10:54:42.998: INFO: Pod downwardapi-volume-76318dc7-0838-4235-860c-f9050c0e5581 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 10:54:42.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6560" for this suite.
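The downward API volume test above reads the container's own CPU request back out of a mounted file. A minimal sketch of that wiring (names and the request value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-cpu    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                # illustrative stand-in for the e2e image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request         # file exposing the value of requests.cpu
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
```

The value written to the file is the request scaled by the `resourceFieldRef` divisor (default `1`) and rounded up, so the container can observe its own resource settings without an API call.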
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":303,"completed":167,"skipped":2648,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 10:54:43.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 10:54:43.096: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Oct 5 10:54:43.107: INFO: Pod name sample-pod: Found 0 pods out of 1
Oct 5 10:54:48.115: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Oct 5 10:54:48.115: INFO: Creating deployment "test-rolling-update-deployment"
Oct 5 10:54:48.123: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Oct 5 10:54:48.150: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Oct 5 10:54:50.294: INFO: Ensuring status for
deployment "test-rolling-update-deployment" is the expected Oct 5 10:54:50.299: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492088, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492088, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492088, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492088, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-c4cb8d6d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:54:52.308: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 10:54:52.326: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1276 /apis/apps/v1/namespaces/deployment-1276/deployments/test-rolling-update-deployment 6d7baed7-881c-472c-ac94-4a79ad4d34c4 3170420 1 2020-10-05 10:54:48 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-10-05 10:54:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 10:54:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9db2d58 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-05 10:54:48 +0000 UTC,LastTransitionTime:2020-10-05 10:54:48 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" has successfully progressed.,LastUpdateTime:2020-10-05 10:54:51 +0000 UTC,LastTransitionTime:2020-10-05 10:54:48 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 5 10:54:52.334: INFO: New ReplicaSet "test-rolling-update-deployment-c4cb8d6d9" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9 deployment-1276 /apis/apps/v1/namespaces/deployment-1276/replicasets/test-rolling-update-deployment-c4cb8d6d9 79197edb-39e4-4014-a1b7-3c4e5dd7e70d 3170409 1 2020-10-05 10:54:48 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6d7baed7-881c-472c-ac94-4a79ad4d34c4 0x9db3290 0x9db3291}] [] [{kube-controller-manager Update apps/v1 2020-10-05 10:54:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d7baed7-881c-472c-ac94-4a79ad4d34c4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: c4cb8d6d9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9db3308 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:54:52.334: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Oct 5 10:54:52.335: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1276 /apis/apps/v1/namespaces/deployment-1276/replicasets/test-rolling-update-controller 33914ace-349d-4d0a-a558-586e7ab8f54b 3170419 2 2020-10-05 10:54:43 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6d7baed7-881c-472c-ac94-4a79ad4d34c4 0x9db3187 0x9db3188}] [] [{e2e.test Update apps/v1 2020-10-05 10:54:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 10:54:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6d7baed7-881c-472c-ac94-4a79ad4d34c4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x9db3228 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:54:52.342: INFO: Pod "test-rolling-update-deployment-c4cb8d6d9-pbhrz" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-c4cb8d6d9-pbhrz test-rolling-update-deployment-c4cb8d6d9- deployment-1276 /api/v1/namespaces/deployment-1276/pods/test-rolling-update-deployment-c4cb8d6d9-pbhrz 91167ee0-cd7e-4a7e-8188-378eb1917296 3170408 0 2020-10-05 10:54:48 +0000 UTC map[name:sample-pod pod-template-hash:c4cb8d6d9] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-c4cb8d6d9 79197edb-39e4-4014-a1b7-3c4e5dd7e70d 0x9c53550 0x9c53551}] [] [{kube-controller-manager Update v1 2020-10-05 10:54:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"79197edb-39e4-4014-a1b7-3c4e5dd7e70d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:54:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.99\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lt8ss,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lt8ss,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lt8ss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},
SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:54:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:54:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:54:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:54:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.99,StartTime:2020-10-05 10:54:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 10:54:50 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://04853fcd2979f1616d8bed50731ac4683203bfeaa348cf0aa033d97588d510c5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.99,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:54:52.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1276" for this suite. 
• [SLOW TEST:9.324 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":168,"skipped":2665,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:54:52.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:55:01.220: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:55:03.237: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492101, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492101, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492101, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492101, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:55:06.278: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:55:06.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4369" for this suite. 
STEP: Destroying namespace "webhook-4369-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.102 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":303,"completed":169,"skipped":2684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:55:06.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update 
annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 10:55:11.129: INFO: Successfully updated pod "annotationupdate530bef68-4b86-4c45-8c07-5f68954cacb7" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:55:15.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2808" for this suite. • [SLOW TEST:8.724 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":303,"completed":170,"skipped":2727,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:55:15.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting 
for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-7264a420-ad51-4422-9888-a1eda0d32cd6 STEP: Creating a pod to test consume configMaps Oct 5 10:55:15.293: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4" in namespace "configmap-4028" to be "Succeeded or Failed" Oct 5 10:55:15.315: INFO: Pod "pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4": Phase="Pending", Reason="", readiness=false. Elapsed: 20.914293ms Oct 5 10:55:17.460: INFO: Pod "pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.166185751s Oct 5 10:55:19.469: INFO: Pod "pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.175493058s STEP: Saw pod success Oct 5 10:55:19.470: INFO: Pod "pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4" satisfied condition "Succeeded or Failed" Oct 5 10:55:19.476: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4 container configmap-volume-test: STEP: delete the pod Oct 5 10:55:19.552: INFO: Waiting for pod pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4 to disappear Oct 5 10:55:19.578: INFO: Pod pod-configmaps-6d44bbbe-1d24-47fd-8759-14af8f8908d4 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:55:19.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4028" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":171,"skipped":2729,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:55:19.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5288 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5288;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5288 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5288;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5288.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-5288.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5288.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5288.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 218.216.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.216.218_udp@PTR;check="$$(dig +tcp +noall +answer +search 218.216.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.216.218_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5288 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5288;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5288 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5288;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5288.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5288.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5288.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5288.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5288.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5288.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5288.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 218.216.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.216.218_udp@PTR;check="$$(dig +tcp +noall +answer +search 218.216.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.216.218_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 10:55:27.816: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.821: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.833: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods 
dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.837: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.841: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.844: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.869: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.872: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.876: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.883: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested 
resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.887: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.891: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.895: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:27.920: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] Oct 5 10:55:32.929: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.934: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the 
requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.940: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.945: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.949: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.953: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.957: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.962: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.992: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:32.997: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could 
not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.002: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.011: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.015: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.020: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.025: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:33.110: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc 
wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] Oct 5 10:55:37.928: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.934: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.940: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.946: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.950: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.955: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.959: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod 
dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.963: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:37.995: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.000: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.004: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.014: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.020: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.025: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:38.052: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] Oct 5 10:55:42.928: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.933: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.937: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.941: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.949: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.954: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.959: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.988: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.992: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:42.996: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.001: INFO: 
Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.005: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.009: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.013: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.017: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:43.043: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] 
Oct 5 10:55:47.926: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.930: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.935: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.938: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.943: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.947: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.951: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.956: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods 
dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.985: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.990: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.995: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:47.999: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:48.004: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:48.008: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:48.012: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:48.016: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested 
resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:48.042: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] Oct 5 10:55:52.927: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.931: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.936: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.940: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods 
dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.949: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.953: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.957: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.984: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.988: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.993: INFO: Unable to read jessie_udp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:52.997: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288 from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:53.001: INFO: Unable to read jessie_udp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested 
resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:53.006: INFO: Unable to read jessie_tcp@dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:53.011: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:53.015: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc from pod dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4: the server could not find the requested resource (get pods dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4) Oct 5 10:55:53.040: INFO: Lookups using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5288 wheezy_tcp@dns-test-service.dns-5288 wheezy_udp@dns-test-service.dns-5288.svc wheezy_tcp@dns-test-service.dns-5288.svc wheezy_udp@_http._tcp.dns-test-service.dns-5288.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5288.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5288 jessie_tcp@dns-test-service.dns-5288 jessie_udp@dns-test-service.dns-5288.svc jessie_tcp@dns-test-service.dns-5288.svc jessie_udp@_http._tcp.dns-test-service.dns-5288.svc jessie_tcp@_http._tcp.dns-test-service.dns-5288.svc] Oct 5 10:55:58.046: INFO: DNS probes using dns-5288/dns-test-5e978ab4-c78e-4b76-a852-6dae5d838ec4 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:55:58.813: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5288" for this suite. • [SLOW TEST:39.288 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":303,"completed":172,"skipped":2744,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:55:58.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 10:55:59.218: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8a71b8e4-27bf-48e1-a160-bab7526bc9af", Controller:(*bool)(0x9be36da), BlockOwnerDeletion:(*bool)(0x9be36db)}} Oct 5 10:55:59.257: INFO: 
pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6e68a701-2290-48d6-beb2-d2316db2e855", Controller:(*bool)(0x9be38ca), BlockOwnerDeletion:(*bool)(0x9be38cb)}} Oct 5 10:55:59.269: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"29122b5e-a685-4e7a-99e7-ad02c099755a", Controller:(*bool)(0x930c37a), BlockOwnerDeletion:(*bool)(0x930c37b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:56:04.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4054" for this suite. • [SLOW TEST:5.480 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":303,"completed":173,"skipped":2754,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 
10:56:04.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:56:08.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4322" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":174,"skipped":2770,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:56:08.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Oct 5 10:56:08.623: INFO: Pod name pod-release: Found 0 pods out of 1 Oct 5 10:56:13.629: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:56:13.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7177" for this suite. • [SLOW TEST:5.306 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":303,"completed":175,"skipped":2772,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:56:13.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Oct 5 10:56:13.903: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Oct 5 10:56:13.964: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 5 10:56:13.965: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Oct 5 10:56:13.977: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} 
memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Oct 5 10:56:13.978: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Oct 5 10:56:14.053: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Oct 5 10:56:14.054: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Oct 5 10:56:21.894: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 
10:56:21.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-3109" for this suite. • [SLOW TEST:8.148 seconds] [sig-scheduling] LimitRange /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":303,"completed":176,"skipped":2788,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:56:21.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating secret secrets-1879/secret-test-68af17bd-23a1-499f-a3f4-c83f007a401b STEP: Creating a pod to test consume secrets Oct 5 10:56:22.098: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b" in namespace "secrets-1879" to be "Succeeded or Failed" Oct 5 10:56:22.120: INFO: Pod "pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.30376ms Oct 5 10:56:24.128: INFO: Pod "pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029719404s Oct 5 10:56:26.134: INFO: Pod "pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035707606s Oct 5 10:56:28.167: INFO: Pod "pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068181389s STEP: Saw pod success Oct 5 10:56:28.167: INFO: Pod "pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b" satisfied condition "Succeeded or Failed" Oct 5 10:56:28.170: INFO: Trying to get logs from node kali-worker pod pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b container env-test: STEP: delete the pod Oct 5 10:56:28.343: INFO: Waiting for pod pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b to disappear Oct 5 10:56:28.395: INFO: Pod pod-configmaps-4a1eb77a-ef53-4be1-a1d8-e3cdbcfc213b no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:56:28.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1879" for this suite. 
• [SLOW TEST:6.457 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:36 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":303,"completed":177,"skipped":2845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:56:28.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:56:37.937: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:56:39.955: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:56:41.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492197, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 
10:56:45.002: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:56:45.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4143" for this suite. STEP: Destroying namespace "webhook-4143-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.863 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":303,"completed":178,"skipped":2881,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:56:45.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-cfb6d8f8-555d-4547-ab0d-6d00d455dc91 in namespace container-probe-8362 Oct 5 10:56:49.404: INFO: Started pod liveness-cfb6d8f8-555d-4547-ab0d-6d00d455dc91 in namespace container-probe-8362 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 10:56:49.411: INFO: Initial restart count of pod liveness-cfb6d8f8-555d-4547-ab0d-6d00d455dc91 is 0 Oct 5 10:57:13.512: INFO: Restart count of pod container-probe-8362/liveness-cfb6d8f8-555d-4547-ab0d-6d00d455dc91 is now 1 (24.100506529s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:13.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8362" for this suite. 
• [SLOW TEST:28.305 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":179,"skipped":2885,"failed":0} SSSSSSS ------------------------------ [sig-network] Ingress API should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:57:13.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 5 10:57:14.042: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Oct 5 10:57:14.049: INFO: starting watch STEP: patching STEP: updating Oct 5 10:57:14.082: INFO: waiting for watch events with expected annotations Oct 5 10:57:14.083: INFO: saw 
patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:14.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-4460" for this suite. •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":303,"completed":180,"skipped":2892,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:57:14.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Oct 5 10:57:22.541: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 5 10:57:22.573: INFO: Pod pod-with-poststart-exec-hook still exists Oct 5 10:57:24.574: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 5 10:57:24.589: INFO: Pod pod-with-poststart-exec-hook still exists Oct 5 10:57:26.574: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 5 10:57:26.586: INFO: Pod pod-with-poststart-exec-hook still exists Oct 5 10:57:28.574: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Oct 5 10:57:28.583: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:28.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5943" for this suite. 
• [SLOW TEST:14.301 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 when create a pod with lifecycle hook /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":303,"completed":181,"skipped":2948,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:57:28.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 
5 10:57:28.727: INFO: Creating deployment "test-recreate-deployment" Oct 5 10:57:28.733: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Oct 5 10:57:28.775: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Oct 5 10:57:30.840: INFO: Waiting deployment "test-recreate-deployment" to complete Oct 5 10:57:30.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492248, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492248, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492248, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492248, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-c96cf48f\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 10:57:32.852: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Oct 5 10:57:32.867: INFO: Updating deployment test-recreate-deployment Oct 5 10:57:32.867: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 10:57:33.630: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2867 
/apis/apps/v1/namespaces/deployment-2867/deployments/test-recreate-deployment 5866633c-f82c-4163-add9-a549176c859d 3171516 2 2020-10-05 10:57:28 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 10:57:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9c52688 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-10-05 10:57:33 +0000 UTC,LastTransitionTime:2020-10-05 10:57:33 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2020-10-05 10:57:33 +0000 UTC,LastTransitionTime:2020-10-05 10:57:28 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Oct 5 10:57:33.649: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-2867 /apis/apps/v1/namespaces/deployment-2867/replicasets/test-recreate-deployment-f79dd4667 797d0cb7-853c-430a-af5d-b7eab4ed4e87 3171511 1 2020-10-05 10:57:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 5866633c-f82c-4163-add9-a549176c859d 0x9c52b40 
0x9c52b41}] [] [{kube-controller-manager Update apps/v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5866633c-f82c-4163-add9-a549176c859d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9c52bb8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:57:33.649: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Oct 5 10:57:33.650: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-c96cf48f deployment-2867 /apis/apps/v1/namespaces/deployment-2867/replicasets/test-recreate-deployment-c96cf48f 2d0f818a-403a-431a-b760-b00665f91814 3171503 2 2020-10-05 10:57:28 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 5866633c-f82c-4163-add9-a549176c859d 0x9c52a3f 0x9c52a50}] [] [{kube-controller-manager Update apps/v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5866633c-f82c-4163-add9-a549176c859d\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: c96cf48f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:c96cf48f] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9c52ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 10:57:33.658: INFO: Pod "test-recreate-deployment-f79dd4667-8kpnz" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-8kpnz test-recreate-deployment-f79dd4667- deployment-2867 /api/v1/namespaces/deployment-2867/pods/test-recreate-deployment-f79dd4667-8kpnz a8828423-1a17-4ad3-a3d7-201492004649 3171517 0 2020-10-05 10:57:33 +0000 UTC map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 797d0cb7-853c-430a-af5d-b7eab4ed4e87 0x9c53030 0x9c53031}] [] [{kube-controller-manager Update v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"797d0cb7-853c-430a-af5d-b7eab4ed4e87\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5llsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5llsx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5llsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:57:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:57:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 10:57:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-10-05 10:57:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-10-05 10:57:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:33.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2867" for this suite. • [SLOW TEST:5.055 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":303,"completed":182,"skipped":2960,"failed":0} SSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:57:33.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Oct 5 10:57:33.964: INFO: Created pod &Pod{ObjectMeta:{dns-3611 dns-3611 /api/v1/namespaces/dns-3611/pods/dns-3611 2e7e6fee-f091-4d6b-a7cd-431ab2628861 3171524 0 2020-10-05 10:57:33 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-10-05 10:57:33 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d5g4f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d5g4f,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s
.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d5g4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},S
tatus:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 10:57:34.068: INFO: The status of Pod dns-3611 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:57:36.076: INFO: The status of Pod dns-3611 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:57:38.077: INFO: The status of Pod dns-3611 is Pending, waiting for it to be Running (with Ready = true) Oct 5 10:57:40.077: INFO: The status of Pod dns-3611 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Oct 5 10:57:40.078: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3611 PodName:dns-3611 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:57:40.078: INFO: >>> kubeConfig: /root/.kube/config I1005 10:57:40.187000 10 log.go:181] (0xc842b60) (0xc842c40) Create stream I1005 10:57:40.187150 10 log.go:181] (0xc842b60) (0xc842c40) Stream added, broadcasting: 1 I1005 10:57:40.191266 10 log.go:181] (0xc842b60) Reply frame received for 1 I1005 10:57:40.191430 10 log.go:181] (0xc842b60) (0xa4ac070) Create stream I1005 10:57:40.191502 10 log.go:181] (0xc842b60) (0xa4ac070) Stream added, broadcasting: 3 I1005 10:57:40.192925 10 log.go:181] (0xc842b60) Reply frame received for 3 I1005 10:57:40.193070 10 log.go:181] (0xc842b60) (0xc8430a0) Create stream I1005 10:57:40.193140 10 log.go:181] (0xc842b60) (0xc8430a0) Stream added, broadcasting: 5 I1005 10:57:40.194547 10 log.go:181] (0xc842b60) Reply frame received for 5 I1005 10:57:40.292124 10 log.go:181] (0xc842b60) Data frame received for 3 I1005 10:57:40.292288 10 log.go:181] (0xa4ac070) (3) Data frame handling I1005 10:57:40.292425 10 log.go:181] (0xa4ac070) (3) Data frame sent I1005 
10:57:40.294252 10 log.go:181] (0xc842b60) Data frame received for 5 I1005 10:57:40.294418 10 log.go:181] (0xc8430a0) (5) Data frame handling I1005 10:57:40.294648 10 log.go:181] (0xc842b60) Data frame received for 3 I1005 10:57:40.294742 10 log.go:181] (0xa4ac070) (3) Data frame handling I1005 10:57:40.295723 10 log.go:181] (0xc842b60) Data frame received for 1 I1005 10:57:40.295864 10 log.go:181] (0xc842c40) (1) Data frame handling I1005 10:57:40.296013 10 log.go:181] (0xc842c40) (1) Data frame sent I1005 10:57:40.296238 10 log.go:181] (0xc842b60) (0xc842c40) Stream removed, broadcasting: 1 I1005 10:57:40.296483 10 log.go:181] (0xc842b60) Go away received I1005 10:57:40.296940 10 log.go:181] (0xc842b60) (0xc842c40) Stream removed, broadcasting: 1 I1005 10:57:40.297101 10 log.go:181] (0xc842b60) (0xa4ac070) Stream removed, broadcasting: 3 I1005 10:57:40.297291 10 log.go:181] (0xc842b60) (0xc8430a0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Oct 5 10:57:40.297: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3611 PodName:dns-3611 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 10:57:40.298: INFO: >>> kubeConfig: /root/.kube/config I1005 10:57:40.396378 10 log.go:181] (0xc843960) (0xc843b20) Create stream I1005 10:57:40.396508 10 log.go:181] (0xc843960) (0xc843b20) Stream added, broadcasting: 1 I1005 10:57:40.400165 10 log.go:181] (0xc843960) Reply frame received for 1 I1005 10:57:40.400338 10 log.go:181] (0xc843960) (0xa4acbd0) Create stream I1005 10:57:40.400426 10 log.go:181] (0xc843960) (0xa4acbd0) Stream added, broadcasting: 3 I1005 10:57:40.401752 10 log.go:181] (0xc843960) Reply frame received for 3 I1005 10:57:40.401925 10 log.go:181] (0xc843960) (0xc843f80) Create stream I1005 10:57:40.401991 10 log.go:181] (0xc843960) (0xc843f80) Stream added, broadcasting: 5 I1005 10:57:40.403131 10 log.go:181] (0xc843960) Reply frame received for 5 I1005 10:57:40.486492 10 log.go:181] (0xc843960) Data frame received for 3 I1005 10:57:40.486617 10 log.go:181] (0xa4acbd0) (3) Data frame handling I1005 10:57:40.486745 10 log.go:181] (0xa4acbd0) (3) Data frame sent I1005 10:57:40.487757 10 log.go:181] (0xc843960) Data frame received for 5 I1005 10:57:40.487902 10 log.go:181] (0xc843f80) (5) Data frame handling I1005 10:57:40.487998 10 log.go:181] (0xc843960) Data frame received for 3 I1005 10:57:40.488087 10 log.go:181] (0xa4acbd0) (3) Data frame handling I1005 10:57:40.489264 10 log.go:181] (0xc843960) Data frame received for 1 I1005 10:57:40.489351 10 log.go:181] (0xc843b20) (1) Data frame handling I1005 10:57:40.489441 10 log.go:181] (0xc843b20) (1) Data frame sent I1005 10:57:40.489533 10 log.go:181] (0xc843960) (0xc843b20) Stream removed, broadcasting: 1 I1005 10:57:40.489650 10 log.go:181] (0xc843960) Go away received I1005 10:57:40.490191 10 log.go:181] (0xc843960) (0xc843b20) Stream removed, broadcasting: 1 I1005 10:57:40.490354 
10 log.go:181] (0xc843960) (0xa4acbd0) Stream removed, broadcasting: 3 I1005 10:57:40.490507 10 log.go:181] (0xc843960) (0xc843f80) Stream removed, broadcasting: 5 Oct 5 10:57:40.490: INFO: Deleting pod dns-3611... [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:40.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3611" for this suite. • [SLOW TEST:6.866 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":303,"completed":183,"skipped":2963,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:57:40.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 10:57:47.368: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 10:57:49.387: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492267, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492267, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492267, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737492267, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 10:57:52.422: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Oct 5 10:57:56.485: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config attach --namespace=webhook-7069 to-be-attached-pod -i -c=container1' Oct 5 10:57:58.024: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:57:58.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7069" for this suite. STEP: Destroying namespace "webhook-7069-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.603 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":303,"completed":184,"skipped":2992,"failed":0} [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client 
Oct 5 10:57:58.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W1005 10:58:08.341713 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 10:59:10.372: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:59:10.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-194" for this suite. 
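The garbage-collector behaviour exercised above hinges on `ownerReferences`: pods created by a ReplicationController carry a reference back to it, so deleting the RC without orphaning lets the garbage collector cascade-delete the pods. A minimal sketch of what such a pod's metadata looks like — the names and UID below are illustrative, not taken from this run:

```python
import json

# Illustrative pod metadata as an RC's controller would set it.
# The pod/RC names and the UID are made up for this sketch.
pod_metadata = {
    "name": "example-rc-abc12",
    "namespace": "gc-194",
    "ownerReferences": [
        {
            "apiVersion": "v1",
            "kind": "ReplicationController",
            "name": "example-rc",
            "uid": "00000000-0000-0000-0000-000000000000",
            # controller=true marks the managing owner;
            # blockOwnerDeletion=true lets foreground deletion wait on this pod.
            "controller": True,
            "blockOwnerDeletion": True,
        }
    ],
}

# Deleting the owning RC (without an Orphan propagation policy) lets the
# garbage collector remove every object whose ownerReferences point at it,
# which is exactly what the test waits for above.
print(json.dumps(pod_metadata, indent=2))
```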
• [SLOW TEST:72.242 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":303,"completed":185,"skipped":2992,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:59:10.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4100 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service 
externalsvc in namespace services-4100 STEP: creating replication controller externalsvc in namespace services-4100 I1005 10:59:10.645574 10 runners.go:190] Created replication controller with name: externalsvc, namespace: services-4100, replica count: 2 I1005 10:59:13.697054 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 10:59:16.697997 10 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Oct 5 10:59:16.781: INFO: Creating new exec pod Oct 5 10:59:20.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4100 execpodr5bb9 -- /bin/sh -x -c nslookup nodeport-service.services-4100.svc.cluster.local' Oct 5 10:59:22.349: INFO: stderr: "I1005 10:59:22.210056 3030 log.go:181] (0x247bc70) (0x247be30) Create stream\nI1005 10:59:22.213052 3030 log.go:181] (0x247bc70) (0x247be30) Stream added, broadcasting: 1\nI1005 10:59:22.222146 3030 log.go:181] (0x247bc70) Reply frame received for 1\nI1005 10:59:22.222526 3030 log.go:181] (0x247bc70) (0x2ea8070) Create stream\nI1005 10:59:22.222582 3030 log.go:181] (0x247bc70) (0x2ea8070) Stream added, broadcasting: 3\nI1005 10:59:22.223817 3030 log.go:181] (0x247bc70) Reply frame received for 3\nI1005 10:59:22.224015 3030 log.go:181] (0x247bc70) (0x302e070) Create stream\nI1005 10:59:22.224069 3030 log.go:181] (0x247bc70) (0x302e070) Stream added, broadcasting: 5\nI1005 10:59:22.225613 3030 log.go:181] (0x247bc70) Reply frame received for 5\nI1005 10:59:22.312538 3030 log.go:181] (0x247bc70) Data frame received for 5\nI1005 10:59:22.312733 3030 log.go:181] (0x302e070) (5) Data frame handling\nI1005 10:59:22.313146 3030 log.go:181] (0x302e070) (5) Data frame sent\n+ nslookup 
nodeport-service.services-4100.svc.cluster.local\nI1005 10:59:22.329258 3030 log.go:181] (0x247bc70) Data frame received for 3\nI1005 10:59:22.329458 3030 log.go:181] (0x2ea8070) (3) Data frame handling\nI1005 10:59:22.329660 3030 log.go:181] (0x2ea8070) (3) Data frame sent\nI1005 10:59:22.330067 3030 log.go:181] (0x247bc70) Data frame received for 3\nI1005 10:59:22.330191 3030 log.go:181] (0x2ea8070) (3) Data frame handling\nI1005 10:59:22.330298 3030 log.go:181] (0x2ea8070) (3) Data frame sent\nI1005 10:59:22.330596 3030 log.go:181] (0x247bc70) Data frame received for 3\nI1005 10:59:22.330760 3030 log.go:181] (0x2ea8070) (3) Data frame handling\nI1005 10:59:22.330917 3030 log.go:181] (0x247bc70) Data frame received for 5\nI1005 10:59:22.331050 3030 log.go:181] (0x302e070) (5) Data frame handling\nI1005 10:59:22.333534 3030 log.go:181] (0x247bc70) Data frame received for 1\nI1005 10:59:22.333673 3030 log.go:181] (0x247be30) (1) Data frame handling\nI1005 10:59:22.333807 3030 log.go:181] (0x247be30) (1) Data frame sent\nI1005 10:59:22.335584 3030 log.go:181] (0x247bc70) (0x247be30) Stream removed, broadcasting: 1\nI1005 10:59:22.336716 3030 log.go:181] (0x247bc70) Go away received\nI1005 10:59:22.340451 3030 log.go:181] (0x247bc70) (0x247be30) Stream removed, broadcasting: 1\nI1005 10:59:22.340751 3030 log.go:181] (0x247bc70) (0x2ea8070) Stream removed, broadcasting: 3\nI1005 10:59:22.341051 3030 log.go:181] (0x247bc70) (0x302e070) Stream removed, broadcasting: 5\n" Oct 5 10:59:22.350: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4100.svc.cluster.local\tcanonical name = externalsvc.services-4100.svc.cluster.local.\nName:\texternalsvc.services-4100.svc.cluster.local\nAddress: 10.108.240.78\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4100, will wait for the garbage collector to delete the pods Oct 5 10:59:22.416: INFO: Deleting ReplicationController externalsvc took: 9.543108ms Oct 5 
10:59:22.817: INFO: Terminating ReplicationController externalsvc pods took: 400.924426ms Oct 5 10:59:38.738: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 10:59:38.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4100" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:28.371 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":303,"completed":186,"skipped":3011,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 10:59:38.765: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for 
a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod busybox-4d8530c3-b752-4011-91dd-1447cbf71f09 in namespace container-probe-9319 Oct 5 10:59:42.889: INFO: Started pod busybox-4d8530c3-b752-4011-91dd-1447cbf71f09 in namespace container-probe-9319 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 10:59:42.895: INFO: Initial restart count of pod busybox-4d8530c3-b752-4011-91dd-1447cbf71f09 is 0 Oct 5 11:00:29.176: INFO: Restart count of pod container-probe-9319/busybox-4d8530c3-b752-4011-91dd-1447cbf71f09 is now 1 (46.280975873s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:00:29.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9319" for this suite. 
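The probe test above creates a busybox pod whose exec liveness probe runs `cat /tmp/health`; once the file is gone the probe fails and the kubelet restarts the container, which is the restartCount 0 → 1 transition the log records. A hedged sketch of such a manifest — the container command shown is a common pattern for making the probe pass and then fail, not quoted from this log:

```python
import json

# Sketch of a pod with an exec liveness probe on /tmp/health.
# The shell command creates the file, waits, removes it, then sleeps,
# so the probe succeeds at first and later fails, forcing a restart.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-liveness-sketch"},
    "spec": {
        "containers": [
            {
                "name": "busybox",
                "image": "busybox",
                "args": [
                    "/bin/sh", "-c",
                    "touch /tmp/health; sleep 30; "
                    "rm -f /tmp/health; sleep 600",
                ],
                "livenessProbe": {
                    "exec": {"command": ["cat", "/tmp/health"]},
                    "initialDelaySeconds": 5,
                    "periodSeconds": 5,
                },
            }
        ],
    },
}
print(json.dumps(pod, indent=2))
```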
• [SLOW TEST:50.501 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":303,"completed":187,"skipped":3032,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:00:29.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-5c2351a1-0ebd-4871-b34f-b96329f12f72 STEP: Creating a pod to test consume secrets Oct 5 11:00:29.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8" in namespace "projected-7814" to be "Succeeded or Failed" Oct 5 11:00:29.477: 
INFO: Pod "pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 76.706348ms Oct 5 11:00:31.519: INFO: Pod "pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118752387s Oct 5 11:00:33.527: INFO: Pod "pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.127065084s STEP: Saw pod success Oct 5 11:00:33.527: INFO: Pod "pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8" satisfied condition "Succeeded or Failed" Oct 5 11:00:33.533: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8 container projected-secret-volume-test: STEP: delete the pod Oct 5 11:00:33.567: INFO: Waiting for pod pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8 to disappear Oct 5 11:00:33.595: INFO: Pod pod-projected-secrets-b8208fdf-eec8-4463-9617-c0ae7b3bc1f8 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:00:33.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7814" for this suite. 
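The "defaultMode set" variant above can be sketched as a pod mounting a projected secret volume with an explicit `defaultMode`. The secret name and mount path below are hypothetical placeholders, not the generated names in the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-sketch
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Print the file modes so the 0400 default can be verified from the logs.
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/projected
  volumes:
  - name: secret-vol
    projected:
      defaultMode: 0400        # applied to every file the projection writes
      sources:
      - secret:
          name: my-secret      # hypothetical; the test creates projected-secret-test-<uuid>
```

The test then waits for the pod to reach "Succeeded or Failed" and inspects the container logs, which is the `Trying to get logs from node kali-worker` step above.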
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":188,"skipped":3032,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:00:33.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-60a4749a-1575-4841-9b58-ddc41bf02d1c STEP: Creating a pod to test consume configMaps Oct 5 11:00:33.734: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d" in namespace "projected-9016" to be "Succeeded or Failed" Oct 5 11:00:33.763: INFO: Pod "pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d": Phase="Pending", Reason="", readiness=false. Elapsed: 28.161577ms Oct 5 11:00:35.770: INFO: Pod "pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035179424s Oct 5 11:00:37.779: INFO: Pod "pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045144926s STEP: Saw pod success Oct 5 11:00:37.780: INFO: Pod "pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d" satisfied condition "Succeeded or Failed" Oct 5 11:00:37.785: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:00:37.832: INFO: Waiting for pod pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d to disappear Oct 5 11:00:37.844: INFO: Pod pod-projected-configmaps-79db8820-61ff-459d-be3e-49d41687084d no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:00:37.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9016" for this suite. 
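The "mappings and Item mode set" case differs from the `defaultMode` case in that each projected key is remapped to a path and given a per-item `mode`. A minimal sketch, with hypothetical configMap and key names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmap-sketch
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm/path/to/data-2"]
    volumeMounts:
    - name: cm-vol
      mountPath: /etc/cm
  volumes:
  - name: cm-vol
    projected:
      sources:
      - configMap:
          name: my-configmap          # hypothetical; the test generates a uuid-suffixed name
          items:
          - key: data-1               # key in the ConfigMap
            path: path/to/data-2      # remapped path inside the mount ("mappings")
            mode: 0400                # per-item mode ("Item mode set")
```

A per-item `mode` overrides the volume's `defaultMode` for that file only, which is the distinction this test exercises relative to the previous one.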
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":189,"skipped":3051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:00:37.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:00:38.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4100" for this suite. •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":190,"skipped":3077,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:00:38.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90 Oct 5 11:00:38.279: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Oct 5 11:00:38.299: INFO: Waiting for terminating namespaces to be deleted... 
Oct 5 11:00:38.303: INFO: Logging pods the apiserver thinks is on node kali-worker before test Oct 5 11:00:38.336: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 11:00:38.336: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 11:00:38.336: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 11:00:38.336: INFO: Container kube-proxy ready: true, restart count 0 Oct 5 11:00:38.336: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test Oct 5 11:00:38.346: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 11:00:38.346: INFO: Container kindnet-cni ready: true, restart count 0 Oct 5 11:00:38.346: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded) Oct 5 11:00:38.346: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-14cac174-2ed1-4f28-b920-6ea9b85a71d3 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-14cac174-2ed1-4f28-b920-6ea9b85a71d3 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-14cac174-2ed1-4f28-b920-6ea9b85a71d3 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:00:54.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5930" for this suite. 
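The three pods in this predicate test all request hostPort 54321 on the same node but avoid conflicting by differing in `hostIP` or `protocol`. Sketched as manifests (the image and pod names are illustrative; the real test pins pods to the labeled node via node affinity rather than `nodeName`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeName: kali-worker2              # simplification: pin to one node directly
  containers:
  - name: agnhost
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeName: kali-worker2
  containers:
  - name: agnhost
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2               # same port, different hostIP -> no conflict
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  nodeName: kali-worker2
  containers:
  - name: agnhost
    image: busybox
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2               # same port and hostIP as pod2...
      protocol: UDP                   # ...but different protocol -> no conflict
```

The scheduler treats a host port as occupied only for the exact (hostIP, hostPort, protocol) triple, so all three pods schedule onto kali-worker2, which is what the test asserts.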
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:16.406 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":303,"completed":191,"skipped":3172,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:00:54.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:00:54.688: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Oct 5 11:01:05.417: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 create -f -' Oct 5 11:01:10.891: INFO: stderr: "" Oct 5 11:01:10.891: INFO: stdout: "e2e-test-crd-publish-openapi-5150-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 5 11:01:10.892: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 delete e2e-test-crd-publish-openapi-5150-crds test-foo' Oct 5 11:01:12.114: INFO: stderr: "" Oct 5 11:01:12.114: INFO: stdout: "e2e-test-crd-publish-openapi-5150-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Oct 5 11:01:12.114: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 apply -f -' Oct 5 11:01:15.267: INFO: stderr: "" Oct 5 11:01:15.267: INFO: stdout: "e2e-test-crd-publish-openapi-5150-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Oct 5 11:01:15.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 delete e2e-test-crd-publish-openapi-5150-crds test-foo' Oct 5 11:01:16.622: INFO: stderr: "" Oct 5 11:01:16.622: INFO: stdout: "e2e-test-crd-publish-openapi-5150-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Oct 5 11:01:16.623: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 create -f -' Oct 5 11:01:18.789: INFO: rc: 1 Oct 5 11:01:18.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 apply -f -' Oct 5 11:01:21.224: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Oct 5 11:01:21.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 create -f -' Oct 5 11:01:24.194: INFO: rc: 1 Oct 5 11:01:24.194: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5004 apply -f -' Oct 5 11:01:26.619: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Oct 5 11:01:26.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5150-crds' Oct 5 11:01:29.102: INFO: stderr: "" Oct 5 11:01:29.102: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5150-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Oct 5 11:01:29.105: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5150-crds.metadata' Oct 5 11:01:31.898: INFO: stderr: "" Oct 5 11:01:31.898: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5150-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Oct 5 11:01:31.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5150-crds.spec' Oct 5 11:01:35.014: INFO: stderr: "" Oct 5 11:01:35.014: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5150-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Oct 5 11:01:35.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5150-crds.spec.bars' Oct 5 11:01:37.023: INFO: stderr: "" Oct 5 11:01:37.023: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5150-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Oct 5 11:01:37.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5150-crds.spec.bars2' Oct 5 11:01:39.436: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:01:50.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5004" for this suite. • [SLOW TEST:55.420 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":303,"completed":192,"skipped":3233,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:01:50.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:06.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2276" for this suite. • [SLOW TEST:16.229 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":303,"completed":193,"skipped":3240,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:06.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:02:06.321: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:06.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1329" for this suite. 
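The status sub-resource exercised by this test must be explicitly enabled on the CRD. A minimal sketch of such a definition (group and names are hypothetical; the real test generates random ones):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com     # must be <plural>.<group>
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
    listKind: NoxuList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}    # exposes the /status endpoint this test gets/updates/patches
```

With `subresources.status` set, writes to the main endpoint ignore `.status` and writes to `/status` ignore everything else, which is the get/update/patch behavior the test verifies.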
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":303,"completed":194,"skipped":3246,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:07.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-e7d0c163-36bc-4a55-89ea-1ee42c60c9ce STEP: Creating a pod to test consume configMaps Oct 5 11:02:07.104: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886" in namespace "projected-4703" to be "Succeeded or Failed" Oct 5 11:02:07.147: INFO: Pod "pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886": Phase="Pending", Reason="", readiness=false. Elapsed: 42.385454ms Oct 5 11:02:09.154: INFO: Pod "pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.049365853s Oct 5 11:02:11.162: INFO: Pod "pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058108766s STEP: Saw pod success Oct 5 11:02:11.163: INFO: Pod "pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886" satisfied condition "Succeeded or Failed" Oct 5 11:02:11.168: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886 container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:02:11.344: INFO: Waiting for pod pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886 to disappear Oct 5 11:02:11.413: INFO: Pod pod-projected-configmaps-d1c373e8-27df-4a73-82c5-b8d1d3bbf886 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:11.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4703" for this suite. 
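Editor's note: the projected-ConfigMap test above consumes a ConfigMap through a `projected` volume with a key-to-path mapping while the pod runs as a non-root user. A sketch of the pod shape under those assumptions (all names and the image are hypothetical; the e2e framework generates its own):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # hypothetical
spec:
  securityContext:
    runAsUser: 1000                # non-root, matching the test's intent
  restartPolicy: Never
  containers:
    - name: projected-configmap-volume-test
      image: busybox               # hypothetical image choice
      command: ["cat", "/etc/projected-configmap-volume/path/to/data"]
      volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
  volumes:
    - name: projected-configmap-volume
      projected:
        sources:
          - configMap:
              name: my-configmap   # hypothetical ConfigMap name
              items:
                - key: data-1      # map this key to a custom file path
                  path: path/to/data
```

The "Succeeded or Failed" wait in the log corresponds to this pattern: the container cats the mapped file once and exits, and the test then fetches its logs to verify the content.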
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":195,"skipped":3268,"failed":0} SSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:11.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:15.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2470" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":303,"completed":196,"skipped":3271,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:15.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:02:15.802: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:17.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9606" for this suite. 
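Editor's note: the "custom resource defaulting for requests and from storage" test above depends on `default` values declared in the CRD's structural schema, which the apiserver applies both when handling requests and when reading objects back from storage. A minimal sketch with hypothetical names:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com        # hypothetical name
spec:
  group: example.com
  names:
    plural: widgets
    singular: widget
    kind: Widget
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                replicas:
                  type: integer
                  default: 1       # applied on create/update requests and on reads from etcd
```

An object created with an empty `spec` comes back with `spec.replicas: 1`, and an object persisted before the default was added is defaulted when it is next read, which is the "from storage" half of the test name.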
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":303,"completed":197,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:17.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0644 on tmpfs Oct 5 11:02:17.183: INFO: Waiting up to 5m0s for pod "pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496" in namespace "emptydir-9192" to be "Succeeded or Failed" Oct 5 11:02:17.199: INFO: Pod "pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496": Phase="Pending", Reason="", readiness=false. Elapsed: 15.34922ms Oct 5 11:02:19.208: INFO: Pod "pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024760599s Oct 5 11:02:21.216: INFO: Pod "pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.032848108s STEP: Saw pod success Oct 5 11:02:21.217: INFO: Pod "pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496" satisfied condition "Succeeded or Failed" Oct 5 11:02:21.222: INFO: Trying to get logs from node kali-worker2 pod pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496 container test-container: STEP: delete the pod Oct 5 11:02:21.446: INFO: Waiting for pod pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496 to disappear Oct 5 11:02:21.455: INFO: Pod pod-50d443d6-6c49-40c1-b3a7-8a9c0b1ea496 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:02:21.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9192" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":198,"skipped":3324,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:02:21.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-9191 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a new StatefulSet Oct 5 11:02:22.067: INFO: Found 0 stateful pods, waiting for 3 Oct 5 11:02:32.077: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:02:32.077: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:02:32.077: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Oct 5 11:02:42.077: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:02:42.077: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:02:42.077: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Oct 5 11:02:42.120: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Oct 5 11:02:52.197: INFO: Updating stateful set ss2 Oct 5 11:02:52.310: INFO: Waiting for Pod statefulset-9191/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when 
they are deleted Oct 5 11:03:02.886: INFO: Found 2 stateful pods, waiting for 3 Oct 5 11:03:12.896: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:03:12.896: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Oct 5 11:03:12.896: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Oct 5 11:03:12.931: INFO: Updating stateful set ss2 Oct 5 11:03:13.003: INFO: Waiting for Pod statefulset-9191/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Oct 5 11:03:23.046: INFO: Updating stateful set ss2 Oct 5 11:03:23.076: INFO: Waiting for StatefulSet statefulset-9191/ss2 to complete update Oct 5 11:03:23.077: INFO: Waiting for Pod statefulset-9191/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Oct 5 11:03:33.090: INFO: Deleting all statefulset in ns statefulset-9191 Oct 5 11:03:33.095: INFO: Scaling statefulset ss2 to 0 Oct 5 11:03:53.122: INFO: Waiting for statefulset status.replicas updated to 0 Oct 5 11:03:53.128: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:03:53.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9191" for this suite. 
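Editor's note: the canary and phased rolling updates logged above are driven by the StatefulSet RollingUpdate `partition` field: only pods with an ordinal greater than or equal to the partition receive the new template, so lowering the partition step by step phases the rollout from ss2-2 down to ss2-0. A sketch of the spec shape (names are hypothetical; the target image is the one from the log):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2-demo                   # hypothetical
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2-demo
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2                 # only ordinal >= 2 updates: ss2-2 is the canary
  template:
    metadata:
      labels:
        app: ss2-demo
    spec:
      containers:
        - name: web
          image: docker.io/library/httpd:2.4.39-alpine
```

Lowering `partition` to 1 and then 0 (for example with `kubectl patch statefulset`) rolls the new revision out to ss2-1 and finally ss2-0, which matches the sequence of "Waiting for Pod ... to have revision" messages in the log. Setting the partition above `replicas`, as the "Not applying an update when the partition is greater than the number of replicas" step does, updates no pods at all.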
• [SLOW TEST:91.711 seconds] [sig-apps] StatefulSet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":303,"completed":199,"skipped":3351,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:03:53.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-upd-5811164a-c237-45fe-94f3-7ee1c58ce7ff STEP: Creating the pod 
STEP: Updating configmap configmap-test-upd-5811164a-c237-45fe-94f3-7ee1c58ce7ff STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:03:59.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7615" for this suite. • [SLOW TEST:6.250 seconds] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":200,"skipped":3383,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:03:59.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should have session 
affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-4984 STEP: creating service affinity-nodeport in namespace services-4984 STEP: creating replication controller affinity-nodeport in namespace services-4984 I1005 11:03:59.566870 10 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-4984, replica count: 3 I1005 11:04:02.618096 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:04:05.618767 10 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 11:04:05.638: INFO: Creating new exec pod Oct 5 11:04:10.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4984 execpod-affinity4xknr -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Oct 5 11:04:12.227: INFO: stderr: "I1005 11:04:12.118822 3313 log.go:181] (0x247d420) (0x247d490) Create stream\nI1005 11:04:12.121009 3313 log.go:181] (0x247d420) (0x247d490) Stream added, broadcasting: 1\nI1005 11:04:12.129223 3313 log.go:181] (0x247d420) Reply frame received for 1\nI1005 11:04:12.129896 3313 log.go:181] (0x247d420) (0x26fc4d0) Create stream\nI1005 11:04:12.129993 3313 log.go:181] (0x247d420) (0x26fc4d0) Stream added, broadcasting: 3\nI1005 11:04:12.131597 3313 log.go:181] (0x247d420) Reply frame received for 3\nI1005 11:04:12.132035 3313 log.go:181] (0x247d420) (0x247d9d0) Create stream\nI1005 11:04:12.132128 3313 log.go:181] (0x247d420) (0x247d9d0) Stream added, broadcasting: 5\nI1005 11:04:12.133617 3313 log.go:181] (0x247d420) Reply frame received for 5\nI1005 
11:04:12.208005 3313 log.go:181] (0x247d420) Data frame received for 5\nI1005 11:04:12.208380 3313 log.go:181] (0x247d420) Data frame received for 3\nI1005 11:04:12.208574 3313 log.go:181] (0x26fc4d0) (3) Data frame handling\nI1005 11:04:12.208688 3313 log.go:181] (0x247d9d0) (5) Data frame handling\nI1005 11:04:12.209543 3313 log.go:181] (0x247d420) Data frame received for 1\nI1005 11:04:12.209644 3313 log.go:181] (0x247d490) (1) Data frame handling\nI1005 11:04:12.210027 3313 log.go:181] (0x247d9d0) (5) Data frame sent\n+ nc -zv -t -w 2 affinity-nodeport 80\nI1005 11:04:12.210271 3313 log.go:181] (0x247d490) (1) Data frame sent\nI1005 11:04:12.210341 3313 log.go:181] (0x247d420) Data frame received for 5\nI1005 11:04:12.210433 3313 log.go:181] (0x247d9d0) (5) Data frame handling\nI1005 11:04:12.211578 3313 log.go:181] (0x247d9d0) (5) Data frame sent\nI1005 11:04:12.211673 3313 log.go:181] (0x247d420) Data frame received for 5\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\nI1005 11:04:12.212069 3313 log.go:181] (0x247d420) (0x247d490) Stream removed, broadcasting: 1\nI1005 11:04:12.212963 3313 log.go:181] (0x247d9d0) (5) Data frame handling\nI1005 11:04:12.214511 3313 log.go:181] (0x247d420) Go away received\nI1005 11:04:12.216981 3313 log.go:181] (0x247d420) (0x247d490) Stream removed, broadcasting: 1\nI1005 11:04:12.217260 3313 log.go:181] (0x247d420) (0x26fc4d0) Stream removed, broadcasting: 3\nI1005 11:04:12.217735 3313 log.go:181] (0x247d420) (0x247d9d0) Stream removed, broadcasting: 5\n" Oct 5 11:04:12.228: INFO: stdout: "" Oct 5 11:04:12.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4984 execpod-affinity4xknr -- /bin/sh -x -c nc -zv -t -w 2 10.111.173.198 80' Oct 5 11:04:13.771: INFO: stderr: "I1005 11:04:13.633112 3333 log.go:181] (0x279e000) (0x279f1f0) Create stream\nI1005 11:04:13.637788 3333 log.go:181] (0x279e000) (0x279f1f0) Stream added, 
broadcasting: 1\nI1005 11:04:13.658644 3333 log.go:181] (0x279e000) Reply frame received for 1\nI1005 11:04:13.659229 3333 log.go:181] (0x279e000) (0x2512bd0) Create stream\nI1005 11:04:13.659306 3333 log.go:181] (0x279e000) (0x2512bd0) Stream added, broadcasting: 3\nI1005 11:04:13.660962 3333 log.go:181] (0x279e000) Reply frame received for 3\nI1005 11:04:13.661252 3333 log.go:181] (0x279e000) (0x2f98070) Create stream\nI1005 11:04:13.661350 3333 log.go:181] (0x279e000) (0x2f98070) Stream added, broadcasting: 5\nI1005 11:04:13.662484 3333 log.go:181] (0x279e000) Reply frame received for 5\nI1005 11:04:13.753761 3333 log.go:181] (0x279e000) Data frame received for 3\nI1005 11:04:13.754122 3333 log.go:181] (0x279e000) Data frame received for 5\nI1005 11:04:13.754365 3333 log.go:181] (0x2512bd0) (3) Data frame handling\nI1005 11:04:13.754651 3333 log.go:181] (0x2f98070) (5) Data frame handling\nI1005 11:04:13.754890 3333 log.go:181] (0x279e000) Data frame received for 1\nI1005 11:04:13.755061 3333 log.go:181] (0x279f1f0) (1) Data frame handling\nI1005 11:04:13.755317 3333 log.go:181] (0x2f98070) (5) Data frame sent\nI1005 11:04:13.755657 3333 log.go:181] (0x279f1f0) (1) Data frame sent\n+ nc -zv -t -w 2 10.111.173.198 80\nConnection to 10.111.173.198 80 port [tcp/http] succeeded!\nI1005 11:04:13.756711 3333 log.go:181] (0x279e000) Data frame received for 5\nI1005 11:04:13.756824 3333 log.go:181] (0x2f98070) (5) Data frame handling\nI1005 11:04:13.758407 3333 log.go:181] (0x279e000) (0x279f1f0) Stream removed, broadcasting: 1\nI1005 11:04:13.760255 3333 log.go:181] (0x279e000) Go away received\nI1005 11:04:13.762795 3333 log.go:181] (0x279e000) (0x279f1f0) Stream removed, broadcasting: 1\nI1005 11:04:13.762976 3333 log.go:181] (0x279e000) (0x2512bd0) Stream removed, broadcasting: 3\nI1005 11:04:13.763120 3333 log.go:181] (0x279e000) (0x2f98070) Stream removed, broadcasting: 5\n" Oct 5 11:04:13.772: INFO: stdout: "" Oct 5 11:04:13.773: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4984 execpod-affinity4xknr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 31136' Oct 5 11:04:15.267: INFO: stderr: "I1005 11:04:15.168258 3354 log.go:181] (0x28c4150) (0x28c41c0) Create stream\nI1005 11:04:15.170097 3354 log.go:181] (0x28c4150) (0x28c41c0) Stream added, broadcasting: 1\nI1005 11:04:15.180956 3354 log.go:181] (0x28c4150) Reply frame received for 1\nI1005 11:04:15.181630 3354 log.go:181] (0x28c4150) (0x28c4620) Create stream\nI1005 11:04:15.181706 3354 log.go:181] (0x28c4150) (0x28c4620) Stream added, broadcasting: 3\nI1005 11:04:15.183131 3354 log.go:181] (0x28c4150) Reply frame received for 3\nI1005 11:04:15.183349 3354 log.go:181] (0x28c4150) (0x2ac8070) Create stream\nI1005 11:04:15.183405 3354 log.go:181] (0x28c4150) (0x2ac8070) Stream added, broadcasting: 5\nI1005 11:04:15.184690 3354 log.go:181] (0x28c4150) Reply frame received for 5\nI1005 11:04:15.248406 3354 log.go:181] (0x28c4150) Data frame received for 5\nI1005 11:04:15.248700 3354 log.go:181] (0x2ac8070) (5) Data frame handling\nI1005 11:04:15.248971 3354 log.go:181] (0x28c4150) Data frame received for 3\nI1005 11:04:15.249155 3354 log.go:181] (0x28c4620) (3) Data frame handling\nI1005 11:04:15.249800 3354 log.go:181] (0x28c4150) Data frame received for 1\nI1005 11:04:15.249956 3354 log.go:181] (0x28c41c0) (1) Data frame handling\nI1005 11:04:15.250266 3354 log.go:181] (0x28c41c0) (1) Data frame sent\n+ nc -zv -t -w 2 172.18.0.12 31136\nConnection to 172.18.0.12 31136 port [tcp/31136] succeeded!\nI1005 11:04:15.250513 3354 log.go:181] (0x2ac8070) (5) Data frame sent\nI1005 11:04:15.251733 3354 log.go:181] (0x28c4150) Data frame received for 5\nI1005 11:04:15.251888 3354 log.go:181] (0x2ac8070) (5) Data frame handling\nI1005 11:04:15.253213 3354 log.go:181] (0x28c4150) (0x28c41c0) Stream removed, broadcasting: 1\nI1005 11:04:15.254281 3354 log.go:181] (0x28c4150) Go away 
received\nI1005 11:04:15.257627 3354 log.go:181] (0x28c4150) (0x28c41c0) Stream removed, broadcasting: 1\nI1005 11:04:15.257829 3354 log.go:181] (0x28c4150) (0x28c4620) Stream removed, broadcasting: 3\nI1005 11:04:15.257990 3354 log.go:181] (0x28c4150) (0x2ac8070) Stream removed, broadcasting: 5\n" Oct 5 11:04:15.268: INFO: stdout: "" Oct 5 11:04:15.268: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4984 execpod-affinity4xknr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31136' Oct 5 11:04:16.733: INFO: stderr: "I1005 11:04:16.613504 3374 log.go:181] (0x2838000) (0x2838070) Create stream\nI1005 11:04:16.616326 3374 log.go:181] (0x2838000) (0x2838070) Stream added, broadcasting: 1\nI1005 11:04:16.626264 3374 log.go:181] (0x2838000) Reply frame received for 1\nI1005 11:04:16.626677 3374 log.go:181] (0x2838000) (0x28383f0) Create stream\nI1005 11:04:16.626733 3374 log.go:181] (0x2838000) (0x28383f0) Stream added, broadcasting: 3\nI1005 11:04:16.628357 3374 log.go:181] (0x2838000) Reply frame received for 3\nI1005 11:04:16.628785 3374 log.go:181] (0x2838000) (0x2517420) Create stream\nI1005 11:04:16.628931 3374 log.go:181] (0x2838000) (0x2517420) Stream added, broadcasting: 5\nI1005 11:04:16.630291 3374 log.go:181] (0x2838000) Reply frame received for 5\nI1005 11:04:16.715180 3374 log.go:181] (0x2838000) Data frame received for 3\nI1005 11:04:16.715563 3374 log.go:181] (0x28383f0) (3) Data frame handling\nI1005 11:04:16.715819 3374 log.go:181] (0x2838000) Data frame received for 5\nI1005 11:04:16.716046 3374 log.go:181] (0x2517420) (5) Data frame handling\nI1005 11:04:16.716567 3374 log.go:181] (0x2838000) Data frame received for 1\nI1005 11:04:16.716758 3374 log.go:181] (0x2838070) (1) Data frame handling\nI1005 11:04:16.717536 3374 log.go:181] (0x2838070) (1) Data frame sent\nI1005 11:04:16.718160 3374 log.go:181] (0x2517420) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 
31136\nConnection to 172.18.0.13 31136 port [tcp/31136] succeeded!\nI1005 11:04:16.718330 3374 log.go:181] (0x2838000) Data frame received for 5\nI1005 11:04:16.718404 3374 log.go:181] (0x2517420) (5) Data frame handling\nI1005 11:04:16.719423 3374 log.go:181] (0x2838000) (0x2838070) Stream removed, broadcasting: 1\nI1005 11:04:16.721387 3374 log.go:181] (0x2838000) Go away received\nI1005 11:04:16.723846 3374 log.go:181] (0x2838000) (0x2838070) Stream removed, broadcasting: 1\nI1005 11:04:16.724047 3374 log.go:181] (0x2838000) (0x28383f0) Stream removed, broadcasting: 3\nI1005 11:04:16.724210 3374 log.go:181] (0x2838000) (0x2517420) Stream removed, broadcasting: 5\n" Oct 5 11:04:16.734: INFO: stdout: "" Oct 5 11:04:16.735: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-4984 execpod-affinity4xknr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:31136/ ; done' Oct 5 11:04:18.295: INFO: stderr: "I1005 11:04:18.093314 3394 log.go:181] (0x247a850) (0x247b880) Create stream\nI1005 11:04:18.095305 3394 log.go:181] (0x247a850) (0x247b880) Stream added, broadcasting: 1\nI1005 11:04:18.111087 3394 log.go:181] (0x247a850) Reply frame received for 1\nI1005 11:04:18.111765 3394 log.go:181] (0x247a850) (0x2922150) Create stream\nI1005 11:04:18.111873 3394 log.go:181] (0x247a850) (0x2922150) Stream added, broadcasting: 3\nI1005 11:04:18.113853 3394 log.go:181] (0x247a850) Reply frame received for 3\nI1005 11:04:18.114145 3394 log.go:181] (0x247a850) (0x2f88070) Create stream\nI1005 11:04:18.114224 3394 log.go:181] (0x247a850) (0x2f88070) Stream added, broadcasting: 5\nI1005 11:04:18.115477 3394 log.go:181] (0x247a850) Reply frame received for 5\nI1005 11:04:18.192026 3394 log.go:181] (0x247a850) Data frame received for 3\nI1005 11:04:18.192358 3394 log.go:181] (0x247a850) Data frame received for 5\nI1005 11:04:18.192632 3394 log.go:181] 
(0x2f88070) (5) Data frame handling\nI1005 11:04:18.192886 3394 log.go:181] (0x2922150) (3) Data frame handling\nI1005 11:04:18.193401 3394 log.go:181] (0x2922150) (3) Data frame sent\nI1005 11:04:18.193519 3394 log.go:181] (0x2f88070) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:31136/\n[exec-stream debug output trimmed: the "+ echo" / "+ curl -q -s --connect-timeout 2 http://172.18.0.12:31136/" pair repeats 16 times in total, interleaved with log.go:181 data-frame received/handling/sent messages for streams 3 and 5]\nI1005 11:04:18.279521 3394 log.go:181] (0x247a850) Data frame received for 1\nI1005 11:04:18.279638 3394 log.go:181] (0x247b880) (1) Data frame handling\nI1005 11:04:18.279834 3394 log.go:181] (0x247b880) (1) Data frame sent\nI1005 11:04:18.280394 3394 log.go:181] (0x247a850) (0x247b880) Stream removed, broadcasting: 1\nI1005 11:04:18.282294 3394 log.go:181] (0x247a850) Go away received\nI1005 11:04:18.285864 3394 log.go:181] (0x247a850) (0x247b880) Stream removed, broadcasting: 1\nI1005 11:04:18.286079 3394 log.go:181] (0x247a850) (0x2922150) Stream removed, broadcasting: 3\nI1005 11:04:18.286230 3394 log.go:181] (0x247a850) (0x2f88070) Stream removed, broadcasting: 5\n" Oct 5 11:04:18.299: INFO: stdout: 
"\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq\naffinity-nodeport-wmhmq" Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.299: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Received response from host: affinity-nodeport-wmhmq Oct 5 11:04:18.300: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport in namespace services-4984, will wait for the garbage collector to delete the pods Oct 5 11:04:18.452: INFO: Deleting ReplicationController affinity-nodeport took: 7.948574ms Oct 5 11:04:18.953: INFO: Terminating 
ReplicationController affinity-nodeport pods took: 500.993464ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:04:28.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4984" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:29.396 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":201,"skipped":3399,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:04:28.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 in namespace container-probe-7614 Oct 5 11:04:32.965: INFO: Started pod liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 in namespace container-probe-7614 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 11:04:32.970: INFO: Initial restart count of pod liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is 0 Oct 5 11:04:49.056: INFO: Restart count of pod container-probe-7614/liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is now 1 (16.086334685s elapsed) Oct 5 11:05:09.154: INFO: Restart count of pod container-probe-7614/liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is now 2 (36.184548044s elapsed) Oct 5 11:05:29.243: INFO: Restart count of pod container-probe-7614/liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is now 3 (56.273252762s elapsed) Oct 5 11:05:47.563: INFO: Restart count of pod container-probe-7614/liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is now 4 (1m14.592889083s elapsed) Oct 5 11:06:47.830: INFO: Restart count of pod container-probe-7614/liveness-337d4443-2ed0-4664-b62d-e39fc1d52054 is now 5 (2m14.860380602s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:06:47.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7614" for this suite. 
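[Editor's note] The probe test above samples the pod's restartCount over time (0 → 1 → 2 → 3 → 4 → 5 in the log) and asserts it never decreases. A minimal sketch of that monotonicity check, with illustrative names rather than the real e2e framework API:

```python
# Hypothetical sketch of the "monotonically increasing restart count" check.
# The sampled values below mirror the log above; the helper is illustrative,
# not the actual Kubernetes e2e framework code.

def restarts_monotonic(observed_counts):
    """Return True if each sampled restartCount is >= the previous sample."""
    return all(b >= a for a, b in zip(observed_counts, observed_counts[1:]))

# Samples like those logged above for pod liveness-337d4443-...:
samples = [0, 1, 2, 3, 4, 5]
assert restarts_monotonic(samples)

# Any decrease between samples would fail the conformance check:
assert not restarts_monotonic([0, 2, 1])
```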
• [SLOW TEST:139.059 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":303,"completed":202,"skipped":3432,"failed":0} S ------------------------------ [sig-api-machinery] Events should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:06:47.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create set of events Oct 5 11:06:48.342: INFO: created test-event-1 Oct 5 11:06:48.359: INFO: created test-event-2 Oct 5 11:06:48.386: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events Oct 5 11:06:48.401: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity Oct 5 11:06:48.475: INFO: requesting list of events to confirm 
quantity [AfterEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:06:48.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4989" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":303,"completed":203,"skipped":3433,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:06:48.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating pod Oct 5 11:06:52.883: INFO: Pod pod-hostip-ba3887ec-731c-45a6-857b-3df6535e0abb has hostIP: 172.18.0.12 [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:06:52.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7034" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":303,"completed":204,"skipped":3439,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:06:52.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-c991b770-e4f9-4dcf-ae51-65dd39e58205 STEP: Creating a pod to test consume configMaps Oct 5 11:06:53.007: INFO: Waiting up to 5m0s for pod "pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188" in namespace "configmap-9533" to be "Succeeded or Failed" Oct 5 11:06:53.036: INFO: Pod "pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188": Phase="Pending", Reason="", readiness=false. Elapsed: 28.207214ms Oct 5 11:06:55.101: INFO: Pod "pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09402928s Oct 5 11:06:57.109: INFO: Pod "pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.101895404s STEP: Saw pod success Oct 5 11:06:57.110: INFO: Pod "pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188" satisfied condition "Succeeded or Failed" Oct 5 11:06:57.114: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188 container configmap-volume-test: STEP: delete the pod Oct 5 11:06:57.164: INFO: Waiting for pod pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188 to disappear Oct 5 11:06:57.168: INFO: Pod pod-configmaps-1041ca66-a290-4eea-8611-f32091f7e188 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:06:57.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9533" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":205,"skipped":3465,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:06:57.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] 
should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:06:57.285: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3921' Oct 5 11:07:00.453: INFO: stderr: "" Oct 5 11:07:00.453: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Oct 5 11:07:00.453: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3921' Oct 5 11:07:03.042: INFO: stderr: "" Oct 5 11:07:03.042: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Oct 5 11:07:04.051: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 11:07:04.052: INFO: Found 1 / 1 Oct 5 11:07:04.052: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Oct 5 11:07:04.059: INFO: Selector matched 1 pods for map[app:agnhost] Oct 5 11:07:04.059: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Oct 5 11:07:04.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe pod agnhost-primary-gp6tl --namespace=kubectl-3921' Oct 5 11:07:05.342: INFO: stderr: "" Oct 5 11:07:05.343: INFO: stdout: "Name: agnhost-primary-gp6tl\nNamespace: kubectl-3921\nPriority: 0\nNode: kali-worker/172.18.0.12\nStart Time: Mon, 05 Oct 2020 11:07:00 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: \nStatus: Running\nIP: 10.244.2.123\nIPs:\n IP: 10.244.2.123\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://8a58dd1255867e2a1423961476a288390daaa7b550a2e3075f0a1097f83c3eac\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 05 Oct 2020 11:07:03 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-btd72 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-btd72:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-btd72\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-3921/agnhost-primary-gp6tl to kali-worker\n Normal Pulled 4s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.20\" already present on machine\n Normal Created 3s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Oct 5 11:07:05.349: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe rc agnhost-primary --namespace=kubectl-3921' Oct 5 11:07:06.787: INFO: stderr: "" Oct 5 11:07:06.787: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3921\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.20\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 6s replication-controller Created pod: agnhost-primary-gp6tl\n" Oct 5 11:07:06.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe service agnhost-primary --namespace=kubectl-3921' Oct 5 11:07:08.088: INFO: stderr: "" Oct 5 11:07:08.088: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-3921\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP: 10.98.132.206\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.123:6379\nSession Affinity: None\nEvents: \n" Oct 5 11:07:08.101: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe node kali-control-plane' Oct 5 11:07:09.529: INFO: stderr: "" Oct 5 11:07:09.529: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: 
true\nCreationTimestamp: Wed, 23 Sep 2020 08:28:40 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Mon, 05 Oct 2020 11:07:07 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 05 Oct 2020 11:05:55 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 05 Oct 2020 11:05:55 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 05 Oct 2020 11:05:55 +0000 Wed, 23 Sep 2020 08:28:40 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 05 Oct 2020 11:05:55 +0000 Wed, 23 Sep 2020 08:29:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.11\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: f18d6a3b53c14eaca999fce1081671aa\n System UUID: e919c2db-6960-4f78-a4d1-1e39795c20e3\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu Groovy Gorilla (development branch)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0\n Kubelet Version: v1.19.0\n Kube-Proxy Version: v1.19.0\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-6cvzb 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n 
kube-system coredns-f9fd979d6-zzb7k 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 12d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kindnet-mx6h2 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 12d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-proxy-x4lnq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12d\n local-path-storage local-path-provisioner-78776bfc44-sm58q 0 (0%) 0 (0%) 0 (0%) 0 (0%) 12d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Oct 5 11:07:09.533: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config describe namespace kubectl-3921' Oct 5 11:07:10.785: INFO: stderr: "" Oct 5 11:07:10.785: INFO: stdout: "Name: kubectl-3921\nLabels: e2e-framework=kubectl\n e2e-run=c3f5ad12-076c-4084-9d39-b6e5f4f3a3a2\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3921" for this suite. 
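[Editor's note] The kubectl test above runs `kubectl describe` against a pod, rc, service, node, and namespace, then checks that the output contains the expected fields. A minimal sketch of that substring check, assuming an illustrative field list (not the test's actual expected set):

```python
# Hypothetical sketch of verifying that `kubectl describe` output contains
# the expected fields. Field names here are illustrative examples only.

def missing_fields(describe_output, required_fields):
    """Return the required field labels not present in the describe output."""
    return [f for f in required_fields if f not in describe_output]

sample = "Name: agnhost-primary-gp6tl\nNamespace: kubectl-3921\nStatus: Running\n"
assert missing_fields(sample, ["Name:", "Namespace:", "Status:"]) == []
assert missing_fields(sample, ["Node-Selectors:"]) == ["Node-Selectors:"]
```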
• [SLOW TEST:13.618 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":303,"completed":206,"skipped":3484,"failed":0} [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:10.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:27.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2908" for this suite. • [SLOW TEST:16.365 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":303,"completed":207,"skipped":3484,"failed":0} SSSS ------------------------------ [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:27.170: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [sig-node] PodTemplates /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:27.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4086" for this suite. 
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":303,"completed":208,"skipped":3488,"failed":0} ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:27.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:161 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-802" for this suite. 
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":303,"completed":209,"skipped":3488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:27.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-map-ef36b132-94ae-4a31-a62a-eb748325e89a STEP: Creating a pod to test consume secrets Oct 5 11:07:27.734: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9" in namespace "projected-8765" to be "Succeeded or Failed" Oct 5 11:07:27.744: INFO: Pod "pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.071584ms Oct 5 11:07:29.904: INFO: Pod "pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.170226763s Oct 5 11:07:31.912: INFO: Pod "pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.177836527s STEP: Saw pod success Oct 5 11:07:31.912: INFO: Pod "pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9" satisfied condition "Succeeded or Failed" Oct 5 11:07:31.962: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9 container projected-secret-volume-test: STEP: delete the pod Oct 5 11:07:32.126: INFO: Waiting for pod pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9 to disappear Oct 5 11:07:32.138: INFO: Pod pod-projected-secrets-a0ab2220-f465-4514-a825-e2ab50256be9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:32.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8765" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":210,"skipped":3513,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:32.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:07:49.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3480" for this suite. • [SLOW TEST:17.191 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":303,"completed":211,"skipped":3527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:07:49.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Oct 5 11:08:01.573: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:01.574: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:01.679519 10 log.go:181] (0xaf5e0e0) (0xaf5e1c0) Create stream I1005 11:08:01.679702 10 log.go:181] 
(0xaf5e0e0) (0xaf5e1c0) Stream added, broadcasting: 1 I1005 11:08:01.684616 10 log.go:181] (0xaf5e0e0) Reply frame received for 1 I1005 11:08:01.684909 10 log.go:181] (0xaf5e0e0) (0x8a84700) Create stream I1005 11:08:01.685020 10 log.go:181] (0xaf5e0e0) (0x8a84700) Stream added, broadcasting: 3 I1005 11:08:01.687191 10 log.go:181] (0xaf5e0e0) Reply frame received for 3 I1005 11:08:01.687453 10 log.go:181] (0xaf5e0e0) (0x8a85c70) Create stream I1005 11:08:01.687576 10 log.go:181] (0xaf5e0e0) (0x8a85c70) Stream added, broadcasting: 5 I1005 11:08:01.689625 10 log.go:181] (0xaf5e0e0) Reply frame received for 5 I1005 11:08:01.758799 10 log.go:181] (0xaf5e0e0) Data frame received for 3 I1005 11:08:01.759026 10 log.go:181] (0x8a84700) (3) Data frame handling I1005 11:08:01.759181 10 log.go:181] (0xaf5e0e0) Data frame received for 5 I1005 11:08:01.759423 10 log.go:181] (0x8a85c70) (5) Data frame handling I1005 11:08:01.759735 10 log.go:181] (0x8a84700) (3) Data frame sent I1005 11:08:01.759965 10 log.go:181] (0xaf5e0e0) Data frame received for 3 I1005 11:08:01.760091 10 log.go:181] (0x8a84700) (3) Data frame handling I1005 11:08:01.760206 10 log.go:181] (0xaf5e0e0) Data frame received for 1 I1005 11:08:01.760326 10 log.go:181] (0xaf5e1c0) (1) Data frame handling I1005 11:08:01.760497 10 log.go:181] (0xaf5e1c0) (1) Data frame sent I1005 11:08:01.760665 10 log.go:181] (0xaf5e0e0) (0xaf5e1c0) Stream removed, broadcasting: 1 I1005 11:08:01.761026 10 log.go:181] (0xaf5e0e0) Go away received I1005 11:08:01.761267 10 log.go:181] (0xaf5e0e0) (0xaf5e1c0) Stream removed, broadcasting: 1 I1005 11:08:01.761367 10 log.go:181] (0xaf5e0e0) (0x8a84700) Stream removed, broadcasting: 3 I1005 11:08:01.761447 10 log.go:181] (0xaf5e0e0) (0x8a85c70) Stream removed, broadcasting: 5 Oct 5 11:08:01.761: INFO: Exec stderr: "" Oct 5 11:08:01.761: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-1 Stdin: 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:01.762: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:01.869061 10 log.go:181] (0xaa8eee0) (0xaa8efc0) Create stream I1005 11:08:01.869337 10 log.go:181] (0xaa8eee0) (0xaa8efc0) Stream added, broadcasting: 1 I1005 11:08:01.874751 10 log.go:181] (0xaa8eee0) Reply frame received for 1 I1005 11:08:01.874879 10 log.go:181] (0xaa8eee0) (0xa9a6c40) Create stream I1005 11:08:01.874948 10 log.go:181] (0xaa8eee0) (0xa9a6c40) Stream added, broadcasting: 3 I1005 11:08:01.876204 10 log.go:181] (0xaa8eee0) Reply frame received for 3 I1005 11:08:01.876345 10 log.go:181] (0xaa8eee0) (0xaa8f570) Create stream I1005 11:08:01.876418 10 log.go:181] (0xaa8eee0) (0xaa8f570) Stream added, broadcasting: 5 I1005 11:08:01.877983 10 log.go:181] (0xaa8eee0) Reply frame received for 5 I1005 11:08:01.935242 10 log.go:181] (0xaa8eee0) Data frame received for 3 I1005 11:08:01.935580 10 log.go:181] (0xa9a6c40) (3) Data frame handling I1005 11:08:01.935806 10 log.go:181] (0xa9a6c40) (3) Data frame sent I1005 11:08:01.936158 10 log.go:181] (0xaa8eee0) Data frame received for 3 I1005 11:08:01.936375 10 log.go:181] (0xa9a6c40) (3) Data frame handling I1005 11:08:01.936624 10 log.go:181] (0xaa8eee0) Data frame received for 5 I1005 11:08:01.936820 10 log.go:181] (0xaa8f570) (5) Data frame handling I1005 11:08:01.937523 10 log.go:181] (0xaa8eee0) Data frame received for 1 I1005 11:08:01.937602 10 log.go:181] (0xaa8efc0) (1) Data frame handling I1005 11:08:01.937674 10 log.go:181] (0xaa8efc0) (1) Data frame sent I1005 11:08:01.937759 10 log.go:181] (0xaa8eee0) (0xaa8efc0) Stream removed, broadcasting: 1 I1005 11:08:01.938305 10 log.go:181] (0xaa8eee0) Go away received I1005 11:08:01.938527 10 log.go:181] (0xaa8eee0) (0xaa8efc0) Stream removed, broadcasting: 1 I1005 11:08:01.938698 10 log.go:181] (0xaa8eee0) (0xa9a6c40) Stream removed, broadcasting: 3 I1005 11:08:01.938808 10 log.go:181] (0xaa8eee0) (0xaa8f570) Stream 
removed, broadcasting: 5 Oct 5 11:08:01.938: INFO: Exec stderr: "" Oct 5 11:08:01.939: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:01.939: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.044516 10 log.go:181] (0xaa8fc00) (0xaa8fdc0) Create stream I1005 11:08:02.044660 10 log.go:181] (0xaa8fc00) (0xaa8fdc0) Stream added, broadcasting: 1 I1005 11:08:02.050759 10 log.go:181] (0xaa8fc00) Reply frame received for 1 I1005 11:08:02.051060 10 log.go:181] (0xaa8fc00) (0xaf5f340) Create stream I1005 11:08:02.051223 10 log.go:181] (0xaa8fc00) (0xaf5f340) Stream added, broadcasting: 3 I1005 11:08:02.053042 10 log.go:181] (0xaa8fc00) Reply frame received for 3 I1005 11:08:02.053155 10 log.go:181] (0xaa8fc00) (0xb7f2ee0) Create stream I1005 11:08:02.053221 10 log.go:181] (0xaa8fc00) (0xb7f2ee0) Stream added, broadcasting: 5 I1005 11:08:02.054450 10 log.go:181] (0xaa8fc00) Reply frame received for 5 I1005 11:08:02.102829 10 log.go:181] (0xaa8fc00) Data frame received for 3 I1005 11:08:02.102972 10 log.go:181] (0xaf5f340) (3) Data frame handling I1005 11:08:02.103051 10 log.go:181] (0xaf5f340) (3) Data frame sent I1005 11:08:02.103152 10 log.go:181] (0xaa8fc00) Data frame received for 3 I1005 11:08:02.103222 10 log.go:181] (0xaf5f340) (3) Data frame handling I1005 11:08:02.103371 10 log.go:181] (0xaa8fc00) Data frame received for 5 I1005 11:08:02.103499 10 log.go:181] (0xb7f2ee0) (5) Data frame handling I1005 11:08:02.103842 10 log.go:181] (0xaa8fc00) Data frame received for 1 I1005 11:08:02.103928 10 log.go:181] (0xaa8fdc0) (1) Data frame handling I1005 11:08:02.104028 10 log.go:181] (0xaa8fdc0) (1) Data frame sent I1005 11:08:02.104135 10 log.go:181] (0xaa8fc00) (0xaa8fdc0) Stream removed, broadcasting: 1 I1005 11:08:02.104268 10 log.go:181] (0xaa8fc00) Go away received I1005 11:08:02.104479 10 log.go:181] 
(0xaa8fc00) (0xaa8fdc0) Stream removed, broadcasting: 1 I1005 11:08:02.104558 10 log.go:181] (0xaa8fc00) (0xaf5f340) Stream removed, broadcasting: 3 I1005 11:08:02.104640 10 log.go:181] (0xaa8fc00) (0xb7f2ee0) Stream removed, broadcasting: 5 Oct 5 11:08:02.104: INFO: Exec stderr: "" Oct 5 11:08:02.104: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:02.104: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.204266 10 log.go:181] (0xb7f3500) (0xb7f3570) Create stream I1005 11:08:02.204458 10 log.go:181] (0xb7f3500) (0xb7f3570) Stream added, broadcasting: 1 I1005 11:08:02.208698 10 log.go:181] (0xb7f3500) Reply frame received for 1 I1005 11:08:02.208918 10 log.go:181] (0xb7f3500) (0x9c4aa80) Create stream I1005 11:08:02.209006 10 log.go:181] (0xb7f3500) (0x9c4aa80) Stream added, broadcasting: 3 I1005 11:08:02.210282 10 log.go:181] (0xb7f3500) Reply frame received for 3 I1005 11:08:02.210389 10 log.go:181] (0xb7f3500) (0xb7f38f0) Create stream I1005 11:08:02.210441 10 log.go:181] (0xb7f3500) (0xb7f38f0) Stream added, broadcasting: 5 I1005 11:08:02.211492 10 log.go:181] (0xb7f3500) Reply frame received for 5 I1005 11:08:02.277914 10 log.go:181] (0xb7f3500) Data frame received for 3 I1005 11:08:02.278119 10 log.go:181] (0x9c4aa80) (3) Data frame handling I1005 11:08:02.278357 10 log.go:181] (0xb7f3500) Data frame received for 5 I1005 11:08:02.278551 10 log.go:181] (0xb7f38f0) (5) Data frame handling I1005 11:08:02.278779 10 log.go:181] (0x9c4aa80) (3) Data frame sent I1005 11:08:02.279025 10 log.go:181] (0xb7f3500) Data frame received for 3 I1005 11:08:02.279263 10 log.go:181] (0x9c4aa80) (3) Data frame handling I1005 11:08:02.279703 10 log.go:181] (0xb7f3500) Data frame received for 1 I1005 11:08:02.279898 10 log.go:181] (0xb7f3570) (1) Data frame handling I1005 11:08:02.280033 10 log.go:181] 
(0xb7f3570) (1) Data frame sent I1005 11:08:02.280164 10 log.go:181] (0xb7f3500) (0xb7f3570) Stream removed, broadcasting: 1 I1005 11:08:02.280364 10 log.go:181] (0xb7f3500) Go away received I1005 11:08:02.280811 10 log.go:181] (0xb7f3500) (0xb7f3570) Stream removed, broadcasting: 1 I1005 11:08:02.281136 10 log.go:181] (0xb7f3500) (0x9c4aa80) Stream removed, broadcasting: 3 I1005 11:08:02.281287 10 log.go:181] (0xb7f3500) (0xb7f38f0) Stream removed, broadcasting: 5 Oct 5 11:08:02.281: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Oct 5 11:08:02.281: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:02.281: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.390826 10 log.go:181] (0xaf5fd50) (0xaf5fdc0) Create stream I1005 11:08:02.391018 10 log.go:181] (0xaf5fd50) (0xaf5fdc0) Stream added, broadcasting: 1 I1005 11:08:02.395146 10 log.go:181] (0xaf5fd50) Reply frame received for 1 I1005 11:08:02.395354 10 log.go:181] (0xaf5fd50) (0xb445490) Create stream I1005 11:08:02.395439 10 log.go:181] (0xaf5fd50) (0xb445490) Stream added, broadcasting: 3 I1005 11:08:02.397204 10 log.go:181] (0xaf5fd50) Reply frame received for 3 I1005 11:08:02.397476 10 log.go:181] (0xaf5fd50) (0xb445f10) Create stream I1005 11:08:02.397578 10 log.go:181] (0xaf5fd50) (0xb445f10) Stream added, broadcasting: 5 I1005 11:08:02.399517 10 log.go:181] (0xaf5fd50) Reply frame received for 5 I1005 11:08:02.470469 10 log.go:181] (0xaf5fd50) Data frame received for 5 I1005 11:08:02.470661 10 log.go:181] (0xb445f10) (5) Data frame handling I1005 11:08:02.470846 10 log.go:181] (0xaf5fd50) Data frame received for 3 I1005 11:08:02.471011 10 log.go:181] (0xb445490) (3) Data frame handling I1005 11:08:02.471234 10 log.go:181] (0xb445490) (3) Data frame sent I1005 
11:08:02.471416 10 log.go:181] (0xaf5fd50) Data frame received for 3 I1005 11:08:02.471583 10 log.go:181] (0xb445490) (3) Data frame handling I1005 11:08:02.472009 10 log.go:181] (0xaf5fd50) Data frame received for 1 I1005 11:08:02.472128 10 log.go:181] (0xaf5fdc0) (1) Data frame handling I1005 11:08:02.472272 10 log.go:181] (0xaf5fdc0) (1) Data frame sent I1005 11:08:02.472413 10 log.go:181] (0xaf5fd50) (0xaf5fdc0) Stream removed, broadcasting: 1 I1005 11:08:02.472576 10 log.go:181] (0xaf5fd50) Go away received I1005 11:08:02.473536 10 log.go:181] (0xaf5fd50) (0xaf5fdc0) Stream removed, broadcasting: 1 I1005 11:08:02.473665 10 log.go:181] (0xaf5fd50) (0xb445490) Stream removed, broadcasting: 3 I1005 11:08:02.473755 10 log.go:181] (0xaf5fd50) (0xb445f10) Stream removed, broadcasting: 5 Oct 5 11:08:02.473: INFO: Exec stderr: "" Oct 5 11:08:02.473: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:02.474: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.578305 10 log.go:181] (0xa54f0a0) (0xa54f110) Create stream I1005 11:08:02.578493 10 log.go:181] (0xa54f0a0) (0xa54f110) Stream added, broadcasting: 1 I1005 11:08:02.582634 10 log.go:181] (0xa54f0a0) Reply frame received for 1 I1005 11:08:02.582799 10 log.go:181] (0xa54f0a0) (0x9c4b180) Create stream I1005 11:08:02.582874 10 log.go:181] (0xa54f0a0) (0x9c4b180) Stream added, broadcasting: 3 I1005 11:08:02.584159 10 log.go:181] (0xa54f0a0) Reply frame received for 3 I1005 11:08:02.584296 10 log.go:181] (0xa54f0a0) (0x9c4b500) Create stream I1005 11:08:02.584373 10 log.go:181] (0xa54f0a0) (0x9c4b500) Stream added, broadcasting: 5 I1005 11:08:02.585765 10 log.go:181] (0xa54f0a0) Reply frame received for 5 I1005 11:08:02.645054 10 log.go:181] (0xa54f0a0) Data frame received for 3 I1005 11:08:02.645246 10 log.go:181] (0x9c4b180) (3) Data frame handling 
I1005 11:08:02.645384 10 log.go:181] (0x9c4b180) (3) Data frame sent I1005 11:08:02.645494 10 log.go:181] (0xa54f0a0) Data frame received for 3 I1005 11:08:02.645604 10 log.go:181] (0xa54f0a0) Data frame received for 5 I1005 11:08:02.645791 10 log.go:181] (0x9c4b500) (5) Data frame handling I1005 11:08:02.645994 10 log.go:181] (0x9c4b180) (3) Data frame handling I1005 11:08:02.646681 10 log.go:181] (0xa54f0a0) Data frame received for 1 I1005 11:08:02.646861 10 log.go:181] (0xa54f110) (1) Data frame handling I1005 11:08:02.647019 10 log.go:181] (0xa54f110) (1) Data frame sent I1005 11:08:02.647184 10 log.go:181] (0xa54f0a0) (0xa54f110) Stream removed, broadcasting: 1 I1005 11:08:02.647436 10 log.go:181] (0xa54f0a0) Go away received I1005 11:08:02.647913 10 log.go:181] (0xa54f0a0) (0xa54f110) Stream removed, broadcasting: 1 I1005 11:08:02.648087 10 log.go:181] (0xa54f0a0) (0x9c4b180) Stream removed, broadcasting: 3 I1005 11:08:02.648257 10 log.go:181] (0xa54f0a0) (0x9c4b500) Stream removed, broadcasting: 5 Oct 5 11:08:02.648: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Oct 5 11:08:02.648: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:02.648: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.778106 10 log.go:181] (0x967ce00) (0x967ce70) Create stream I1005 11:08:02.778234 10 log.go:181] (0x967ce00) (0x967ce70) Stream added, broadcasting: 1 I1005 11:08:02.783681 10 log.go:181] (0x967ce00) Reply frame received for 1 I1005 11:08:02.783911 10 log.go:181] (0x967ce00) (0xa9a75e0) Create stream I1005 11:08:02.784044 10 log.go:181] (0x967ce00) (0xa9a75e0) Stream added, broadcasting: 3 I1005 11:08:02.785711 10 log.go:181] (0x967ce00) Reply frame received for 3 I1005 11:08:02.785845 10 log.go:181] (0x967ce00) (0x967d030) 
Create stream I1005 11:08:02.785918 10 log.go:181] (0x967ce00) (0x967d030) Stream added, broadcasting: 5 I1005 11:08:02.787187 10 log.go:181] (0x967ce00) Reply frame received for 5 I1005 11:08:02.851374 10 log.go:181] (0x967ce00) Data frame received for 5 I1005 11:08:02.851542 10 log.go:181] (0x967d030) (5) Data frame handling I1005 11:08:02.851646 10 log.go:181] (0x967ce00) Data frame received for 3 I1005 11:08:02.851742 10 log.go:181] (0xa9a75e0) (3) Data frame handling I1005 11:08:02.851872 10 log.go:181] (0xa9a75e0) (3) Data frame sent I1005 11:08:02.851971 10 log.go:181] (0x967ce00) Data frame received for 3 I1005 11:08:02.852093 10 log.go:181] (0xa9a75e0) (3) Data frame handling I1005 11:08:02.852225 10 log.go:181] (0x967ce00) Data frame received for 1 I1005 11:08:02.852314 10 log.go:181] (0x967ce70) (1) Data frame handling I1005 11:08:02.852395 10 log.go:181] (0x967ce70) (1) Data frame sent I1005 11:08:02.852500 10 log.go:181] (0x967ce00) (0x967ce70) Stream removed, broadcasting: 1 I1005 11:08:02.852604 10 log.go:181] (0x967ce00) Go away received I1005 11:08:02.853012 10 log.go:181] (0x967ce00) (0x967ce70) Stream removed, broadcasting: 1 I1005 11:08:02.853132 10 log.go:181] (0x967ce00) (0xa9a75e0) Stream removed, broadcasting: 3 I1005 11:08:02.853208 10 log.go:181] (0x967ce00) (0x967d030) Stream removed, broadcasting: 5 Oct 5 11:08:02.853: INFO: Exec stderr: "" Oct 5 11:08:02.853: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:08:02.853: INFO: >>> kubeConfig: /root/.kube/config I1005 11:08:02.952518 10 log.go:181] (0x9c4bf10) (0xb0a2000) Create stream I1005 11:08:02.952649 10 log.go:181] (0x9c4bf10) (0xb0a2000) Stream added, broadcasting: 1 I1005 11:08:02.955707 10 log.go:181] (0x9c4bf10) Reply frame received for 1 I1005 11:08:02.955833 10 log.go:181] (0x9c4bf10) (0xb7f3f80) 
Create stream
I1005 11:08:02.955892 10 log.go:181] (0x9c4bf10) (0xb7f3f80) Stream added, broadcasting: 3
I1005 11:08:02.957133 10 log.go:181] (0x9c4bf10) Reply frame received for 3
I1005 11:08:02.957266 10 log.go:181] (0x9c4bf10) (0xb0a2230) Create stream
I1005 11:08:02.957323 10 log.go:181] (0x9c4bf10) (0xb0a2230) Stream added, broadcasting: 5
I1005 11:08:02.958820 10 log.go:181] (0x9c4bf10) Reply frame received for 5
I1005 11:08:03.024626 10 log.go:181] (0x9c4bf10) Data frame received for 5
I1005 11:08:03.024951 10 log.go:181] (0xb0a2230) (5) Data frame handling
I1005 11:08:03.025191 10 log.go:181] (0x9c4bf10) Data frame received for 3
I1005 11:08:03.025433 10 log.go:181] (0xb7f3f80) (3) Data frame handling
I1005 11:08:03.025619 10 log.go:181] (0xb7f3f80) (3) Data frame sent
I1005 11:08:03.025801 10 log.go:181] (0x9c4bf10) Data frame received for 3
I1005 11:08:03.026018 10 log.go:181] (0xb7f3f80) (3) Data frame handling
I1005 11:08:03.026173 10 log.go:181] (0x9c4bf10) Data frame received for 1
I1005 11:08:03.026359 10 log.go:181] (0xb0a2000) (1) Data frame handling
I1005 11:08:03.026550 10 log.go:181] (0xb0a2000) (1) Data frame sent
I1005 11:08:03.026738 10 log.go:181] (0x9c4bf10) (0xb0a2000) Stream removed, broadcasting: 1
I1005 11:08:03.026978 10 log.go:181] (0x9c4bf10) Go away received
I1005 11:08:03.027498 10 log.go:181] (0x9c4bf10) (0xb0a2000) Stream removed, broadcasting: 1
I1005 11:08:03.027702 10 log.go:181] (0x9c4bf10) (0xb7f3f80) Stream removed, broadcasting: 3
I1005 11:08:03.027900 10 log.go:181] (0x9c4bf10) (0xb0a2230) Stream removed, broadcasting: 5
Oct 5 11:08:03.028: INFO: Exec stderr: ""
Oct 5 11:08:03.028: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 11:08:03.028: INFO: >>> kubeConfig: /root/.kube/config
I1005 11:08:03.135228 10 log.go:181] (0xb0a27e0) (0xb0a2850) Create stream
I1005 11:08:03.135374 10 log.go:181] (0xb0a27e0) (0xb0a2850) Stream added, broadcasting: 1
I1005 11:08:03.139688 10 log.go:181] (0xb0a27e0) Reply frame received for 1
I1005 11:08:03.139930 10 log.go:181] (0xb0a27e0) (0xa9a7a40) Create stream
I1005 11:08:03.140050 10 log.go:181] (0xb0a27e0) (0xa9a7a40) Stream added, broadcasting: 3
I1005 11:08:03.141969 10 log.go:181] (0xb0a27e0) Reply frame received for 3
I1005 11:08:03.142127 10 log.go:181] (0xb0a27e0) (0xa54fc70) Create stream
I1005 11:08:03.142216 10 log.go:181] (0xb0a27e0) (0xa54fc70) Stream added, broadcasting: 5
I1005 11:08:03.143478 10 log.go:181] (0xb0a27e0) Reply frame received for 5
I1005 11:08:03.199879 10 log.go:181] (0xb0a27e0) Data frame received for 3
I1005 11:08:03.200160 10 log.go:181] (0xa9a7a40) (3) Data frame handling
I1005 11:08:03.200331 10 log.go:181] (0xb0a27e0) Data frame received for 5
I1005 11:08:03.200488 10 log.go:181] (0xa54fc70) (5) Data frame handling
I1005 11:08:03.200596 10 log.go:181] (0xa9a7a40) (3) Data frame sent
I1005 11:08:03.200754 10 log.go:181] (0xb0a27e0) Data frame received for 3
I1005 11:08:03.200947 10 log.go:181] (0xa9a7a40) (3) Data frame handling
I1005 11:08:03.201221 10 log.go:181] (0xb0a27e0) Data frame received for 1
I1005 11:08:03.201299 10 log.go:181] (0xb0a2850) (1) Data frame handling
I1005 11:08:03.201372 10 log.go:181] (0xb0a2850) (1) Data frame sent
I1005 11:08:03.201454 10 log.go:181] (0xb0a27e0) (0xb0a2850) Stream removed, broadcasting: 1
I1005 11:08:03.201546 10 log.go:181] (0xb0a27e0) Go away received
I1005 11:08:03.201863 10 log.go:181] (0xb0a27e0) (0xb0a2850) Stream removed, broadcasting: 1
I1005 11:08:03.201960 10 log.go:181] (0xb0a27e0) (0xa9a7a40) Stream removed, broadcasting: 3
I1005 11:08:03.202038 10 log.go:181] (0xb0a27e0) (0xa54fc70) Stream removed, broadcasting: 5
Oct 5 11:08:03.202: INFO: Exec stderr: ""
Oct 5 11:08:03.202: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9245 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 11:08:03.202: INFO: >>> kubeConfig: /root/.kube/config
I1005 11:08:03.300191 10 log.go:181] (0x8759030) (0x8759180) Create stream
I1005 11:08:03.300345 10 log.go:181] (0x8759030) (0x8759180) Stream added, broadcasting: 1
I1005 11:08:03.304994 10 log.go:181] (0x8759030) Reply frame received for 1
I1005 11:08:03.305231 10 log.go:181] (0x8759030) (0xa9a7f80) Create stream
I1005 11:08:03.305339 10 log.go:181] (0x8759030) (0xa9a7f80) Stream added, broadcasting: 3
I1005 11:08:03.307192 10 log.go:181] (0x8759030) Reply frame received for 3
I1005 11:08:03.307349 10 log.go:181] (0x8759030) (0x8759960) Create stream
I1005 11:08:03.307423 10 log.go:181] (0x8759030) (0x8759960) Stream added, broadcasting: 5
I1005 11:08:03.309092 10 log.go:181] (0x8759030) Reply frame received for 5
I1005 11:08:03.367256 10 log.go:181] (0x8759030) Data frame received for 3
I1005 11:08:03.367435 10 log.go:181] (0xa9a7f80) (3) Data frame handling
I1005 11:08:03.367540 10 log.go:181] (0xa9a7f80) (3) Data frame sent
I1005 11:08:03.367633 10 log.go:181] (0x8759030) Data frame received for 3
I1005 11:08:03.367730 10 log.go:181] (0xa9a7f80) (3) Data frame handling
I1005 11:08:03.367828 10 log.go:181] (0x8759030) Data frame received for 5
I1005 11:08:03.367944 10 log.go:181] (0x8759960) (5) Data frame handling
I1005 11:08:03.368054 10 log.go:181] (0x8759030) Data frame received for 1
I1005 11:08:03.368149 10 log.go:181] (0x8759180) (1) Data frame handling
I1005 11:08:03.368243 10 log.go:181] (0x8759180) (1) Data frame sent
I1005 11:08:03.368340 10 log.go:181] (0x8759030) (0x8759180) Stream removed, broadcasting: 1
I1005 11:08:03.368441 10 log.go:181] (0x8759030) Go away received
I1005 11:08:03.368759 10 log.go:181] (0x8759030) (0x8759180) Stream removed, broadcasting: 1
I1005 11:08:03.368936 10 log.go:181] (0x8759030) (0xa9a7f80) Stream removed, broadcasting: 3
I1005 11:08:03.369013 10 log.go:181] (0x8759030) (0x8759960) Stream removed, broadcasting: 5
Oct 5 11:08:03.369: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:03.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9245" for this suite.
• [SLOW TEST:14.037 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":212,"skipped":3558,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:03.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:09.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5871" for this suite.
STEP: Destroying namespace "nsdeletetest-6834" for this suite.
Oct 5 11:08:09.765: INFO: Namespace nsdeletetest-6834 was already deleted
STEP: Destroying namespace "nsdeletetest-5246" for this suite.
• [SLOW TEST:6.386 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":303,"completed":213,"skipped":3568,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version should find the server version [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] server version
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:09.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Request ServerVersion
STEP: Confirm major version
Oct 5 11:08:09.837: INFO: Major version: 1
STEP: Confirm minor version
Oct 5 11:08:09.837: INFO: cleanMinorVersion: 19
Oct 5 11:08:09.838: INFO: Minor version: 19
[AfterEach] [sig-api-machinery] server version
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:09.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-568" for this suite.
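The "cleanMinorVersion: 19" line above reflects that the server may report a minor version with a non-numeric suffix (for example "19+" on builds carrying extra patches), which the test normalizes to its digit prefix before comparing. A minimal sketch of that normalization (the function name is ours, not the e2e framework's):

```python
import re

def clean_minor_version(minor: str) -> str:
    """Return the leading digits of a reported minor version.

    Some servers report e.g. "19+"; only the numeric prefix is
    meaningful when confirming the version.
    """
    match = re.match(r"\d+", minor)
    if match is None:
        raise ValueError(f"unparseable minor version: {minor!r}")
    return match.group(0)

print(clean_minor_version("19"))   # -> 19
print(clean_minor_version("19+"))  # -> 19
```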
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":303,"completed":214,"skipped":3585,"failed":0}
SS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:09.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5991.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Oct 5 11:08:16.014: INFO: DNS probes using dns-5991/dns-test-61c6b5af-49a6-4e75-a177-5ead2fe033f7 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:16.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5991" for this suite.
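The probe scripts above derive each pod's DNS A-record name from its IP with an awk pipeline (`hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5991.pod.cluster.local"}'`): dots in the IPv4 address become dashes, followed by the namespace and the `pod` subdomain. The same transformation in Python, as a sketch (function name ours; IPv4 only, matching this run's "Cluster IP family: ipv4"):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build a pod's DNS A-record name from its IPv4 address.

    Dots in the IP become dashes, then <namespace>.pod.<cluster domain>
    is appended, mirroring the awk pipeline in the probe script.
    """
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

print(pod_a_record("10.244.1.5", "dns-5991"))
# -> 10-244-1-5.dns-5991.pod.cluster.local
```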
• [SLOW TEST:6.236 seconds]
[sig-network] DNS
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":303,"completed":215,"skipped":3587,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:16.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on tmpfs
Oct 5 11:08:16.159: INFO: Waiting up to 5m0s for pod "pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977" in namespace "emptydir-3092" to be "Succeeded or Failed"
Oct 5 11:08:16.516: INFO: Pod "pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977": Phase="Pending", Reason="", readiness=false. Elapsed: 356.123953ms
Oct 5 11:08:18.743: INFO: Pod "pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.583775096s
Oct 5 11:08:20.750: INFO: Pod "pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.590480639s
STEP: Saw pod success
Oct 5 11:08:20.750: INFO: Pod "pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977" satisfied condition "Succeeded or Failed"
Oct 5 11:08:20.762: INFO: Trying to get logs from node kali-worker2 pod pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977 container test-container:
STEP: delete the pod
Oct 5 11:08:20.810: INFO: Waiting for pod pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977 to disappear
Oct 5 11:08:20.816: INFO: Pod pod-03d2c8bb-e8f2-419c-aed1-c909d5dfc977 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:20.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3092" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":216,"skipped":3599,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:20.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name secret-test-d248d6d0-d64a-4e14-a4a9-d5e6f27da260
STEP: Creating a pod to test consume secrets
Oct 5 11:08:21.183: INFO: Waiting up to 5m0s for pod "pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185" in namespace "secrets-2954" to be "Succeeded or Failed"
Oct 5 11:08:21.212: INFO: Pod "pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185": Phase="Pending", Reason="", readiness=false. Elapsed: 28.180764ms
Oct 5 11:08:23.220: INFO: Pod "pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036472567s
Oct 5 11:08:25.227: INFO: Pod "pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043134803s
STEP: Saw pod success
Oct 5 11:08:25.227: INFO: Pod "pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185" satisfied condition "Succeeded or Failed"
Oct 5 11:08:25.231: INFO: Trying to get logs from node kali-worker pod pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185 container secret-env-test:
STEP: delete the pod
Oct 5 11:08:25.380: INFO: Waiting for pod pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185 to disappear
Oct 5 11:08:25.409: INFO: Pod pod-secrets-65f74fb8-6e63-40ff-af90-60f62b482185 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:25.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2954" for this suite.
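The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed: ...` lines in the tests above come from a poll loop: check the pod phase, log the elapsed time, retry every couple of seconds until the condition holds or the timeout expires. A generic sketch of that pattern (names and defaults ours, not the framework's):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll condition() until it returns True or timeout (seconds) elapses.

    Returns the elapsed time on success; raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if condition():
            return elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval)

# Example: a fake pod that reaches phase "Succeeded" on the third poll.
phases = ["Pending", "Pending", "Succeeded"]
elapsed = wait_for(lambda: phases.pop(0) == "Succeeded", timeout=5.0, interval=0.01)
print(f"pod succeeded after {elapsed:.2f}s")
```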
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":303,"completed":217,"skipped":3622,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:25.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Oct 5 11:08:31.690: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-3290 PodName:pod-sharedvolume-0168190e-a8f6-4d4e-adb7-d22c84cc6983 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 11:08:31.690: INFO: >>> kubeConfig: /root/.kube/config
I1005 11:08:31.797500 10 log.go:181] (0x852a850) (0x852ab60) Create stream
I1005 11:08:31.797635 10 log.go:181] (0x852a850) (0x852ab60) Stream added, broadcasting: 1
I1005 11:08:31.801549 10 log.go:181] (0x852a850) Reply frame received for 1
I1005 11:08:31.801725 10 log.go:181] (0x852a850) (0x7449880) Create stream
I1005 11:08:31.801811 10 log.go:181] (0x852a850) (0x7449880) Stream added, broadcasting: 3
I1005 11:08:31.803309 10 log.go:181] (0x852a850) Reply frame received for 3
I1005 11:08:31.803496 10 log.go:181] (0x852a850) (0x6f9c4d0) Create stream
I1005 11:08:31.803586 10 log.go:181] (0x852a850) (0x6f9c4d0) Stream added, broadcasting: 5
I1005 11:08:31.804951 10 log.go:181] (0x852a850) Reply frame received for 5
I1005 11:08:31.889695 10 log.go:181] (0x852a850) Data frame received for 5
I1005 11:08:31.889881 10 log.go:181] (0x6f9c4d0) (5) Data frame handling
I1005 11:08:31.890062 10 log.go:181] (0x852a850) Data frame received for 3
I1005 11:08:31.890231 10 log.go:181] (0x7449880) (3) Data frame handling
I1005 11:08:31.890361 10 log.go:181] (0x7449880) (3) Data frame sent
I1005 11:08:31.890469 10 log.go:181] (0x852a850) Data frame received for 3
I1005 11:08:31.890604 10 log.go:181] (0x7449880) (3) Data frame handling
I1005 11:08:31.891625 10 log.go:181] (0x852a850) Data frame received for 1
I1005 11:08:31.891727 10 log.go:181] (0x852ab60) (1) Data frame handling
I1005 11:08:31.891833 10 log.go:181] (0x852ab60) (1) Data frame sent
I1005 11:08:31.891978 10 log.go:181] (0x852a850) (0x852ab60) Stream removed, broadcasting: 1
I1005 11:08:31.892115 10 log.go:181] (0x852a850) Go away received
I1005 11:08:31.892424 10 log.go:181] (0x852a850) (0x852ab60) Stream removed, broadcasting: 1
I1005 11:08:31.892543 10 log.go:181] (0x852a850) (0x7449880) Stream removed, broadcasting: 3
I1005 11:08:31.892635 10 log.go:181] (0x852a850) (0x6f9c4d0) Stream removed, broadcasting: 5
Oct 5 11:08:31.892: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:31.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3290" for this suite.
• [SLOW TEST:6.374 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":303,"completed":218,"skipped":3671,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:31.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test downward API volume plugin
Oct 5 11:08:32.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e" in namespace "downward-api-2438" to be "Succeeded or Failed"
Oct 5 11:08:32.021: INFO: Pod "downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.853561ms
Oct 5 11:08:34.078: INFO: Pod "downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063550017s
Oct 5 11:08:36.086: INFO: Pod "downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071209088s
STEP: Saw pod success
Oct 5 11:08:36.086: INFO: Pod "downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e" satisfied condition "Succeeded or Failed"
Oct 5 11:08:36.090: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e container client-container:
STEP: delete the pod
Oct 5 11:08:36.241: INFO: Waiting for pod downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e to disappear
Oct 5 11:08:36.271: INFO: Pod downwardapi-volume-b49a629f-798d-464c-845a-572c12fee12e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:08:36.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2438" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":303,"completed":219,"skipped":3675,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:08:36.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89
Oct 5 11:08:36.439: INFO: Waiting up to 1m0s for all nodes to be ready
Oct 5 11:09:36.520: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:09:36.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:487
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Oct 5 11:09:40.645: INFO: found a healthy node: kali-worker2
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 11:09:55.087: INFO: pods created so far: [1 1 1]
Oct 5 11:09:55.088: INFO: length of pods created so far: 3
Oct 5 11:10:13.104: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:10:20.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-1892" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:461
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:10:20.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6478" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77
• [SLOW TEST:104.080 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:450
    runs ReplicaSets to verify preemption running path [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":303,"completed":220,"skipped":3739,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:10:20.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 11:10:37.515: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 11:10:39.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 11:10:41.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493037, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 11:10:44.600: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:10:44.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1872" for this suite.
STEP: Destroying namespace "webhook-1872-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:24.552 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":303,"completed":221,"skipped":3751,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:10:44.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name s-test-opt-del-36081c73-bc91-4295-92c6-637ec433a03a STEP: Creating secret with name s-test-opt-upd-362816ee-698e-49e6-a34b-c5ae39862240 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-36081c73-bc91-4295-92c6-637ec433a03a STEP: Updating secret s-test-opt-upd-362816ee-698e-49e6-a34b-c5ae39862240 STEP: Creating secret with name s-test-opt-create-d0de67d6-6495-4f63-b753-9c4f57df45b8 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:10:53.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7483" for this suite. • [SLOW TEST:8.294 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":303,"completed":222,"skipped":3769,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] 
Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:10:53.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Oct 5 11:10:57.411: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:10:57.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9553" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":303,"completed":223,"skipped":3770,"failed":0} SSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:10:57.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token STEP: reading a file in the container Oct 5 11:11:02.180: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-873 pod-service-account-f2f89442-696b-4feb-9288-216324bcc2df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Oct 5 11:11:03.780: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-873 pod-service-account-f2f89442-696b-4feb-9288-216324bcc2df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Oct 5 11:11:05.337: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-873 pod-service-account-f2f89442-696b-4feb-9288-216324bcc2df -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' 
[AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:11:06.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-873" for this suite. • [SLOW TEST:9.346 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":303,"completed":224,"skipped":3781,"failed":0} [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:11:06.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 11:11:06.980: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a" in namespace "downward-api-9888" to be "Succeeded or Failed" Oct 5 11:11:06.988: INFO: Pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.975231ms Oct 5 11:11:08.997: INFO: Pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016747522s Oct 5 11:11:11.004: INFO: Pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024148327s Oct 5 11:11:13.018: INFO: Pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038365385s STEP: Saw pod success Oct 5 11:11:13.018: INFO: Pod "downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a" satisfied condition "Succeeded or Failed" Oct 5 11:11:13.032: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a container client-container: STEP: delete the pod Oct 5 11:11:13.095: INFO: Waiting for pod downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a to disappear Oct 5 11:11:13.100: INFO: Pod downwardapi-volume-a8460d2d-4fe3-4542-9c49-33254a38d29a no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:11:13.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9888" for this suite. 
• [SLOW TEST:6.215 seconds] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":225,"skipped":3781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:11:13.114: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap that has name configmap-test-emptyKey-9ec81fe2-4bd2-41fa-8894-9c3e005041b9 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:11:13.178: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "configmap-2474" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":303,"completed":226,"skipped":3804,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:11:13.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:11:13.247: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2873 I1005 11:11:13.309360 10 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2873, replica count: 1 I1005 11:11:14.360403 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:15.361304 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:16.362473 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:17.363354 10 runners.go:190] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:18.364058 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:19.365299 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:11:20.366005 10 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 11:11:20.510: INFO: Created: latency-svc-gngmk Oct 5 11:11:20.543: INFO: Got endpoints: latency-svc-gngmk [75.211704ms] Oct 5 11:11:20.615: INFO: Created: latency-svc-vw98v Oct 5 11:11:20.631: INFO: Got endpoints: latency-svc-vw98v [87.050793ms] Oct 5 11:11:20.660: INFO: Created: latency-svc-l9xp8 Oct 5 11:11:20.727: INFO: Got endpoints: latency-svc-l9xp8 [182.728478ms] Oct 5 11:11:20.733: INFO: Created: latency-svc-m25mp Oct 5 11:11:20.749: INFO: Got endpoints: latency-svc-m25mp [204.621423ms] Oct 5 11:11:20.786: INFO: Created: latency-svc-s2v6c Oct 5 11:11:20.798: INFO: Got endpoints: latency-svc-s2v6c [253.27468ms] Oct 5 11:11:20.864: INFO: Created: latency-svc-98dls Oct 5 11:11:20.879: INFO: Got endpoints: latency-svc-98dls [333.274697ms] Oct 5 11:11:20.906: INFO: Created: latency-svc-46w4h Oct 5 11:11:20.930: INFO: Got endpoints: latency-svc-46w4h [384.913814ms] Oct 5 11:11:20.954: INFO: Created: latency-svc-4kfb2 Oct 5 11:11:21.008: INFO: Got endpoints: latency-svc-4kfb2 [462.350414ms] Oct 5 11:11:21.011: INFO: Created: latency-svc-q55wh Oct 5 11:11:21.052: INFO: Got endpoints: latency-svc-q55wh [508.13115ms] Oct 5 11:11:21.107: INFO: Created: latency-svc-qn944 Oct 5 11:11:21.139: INFO: Got endpoints: latency-svc-qn944 [594.323826ms] Oct 5 11:11:21.151: INFO: Created: latency-svc-zssfj Oct 5 11:11:21.176: INFO: 
Got endpoints: latency-svc-zssfj [630.68009ms] Oct 5 11:11:21.199: INFO: Created: latency-svc-mq8vg Oct 5 11:11:21.208: INFO: Got endpoints: latency-svc-mq8vg [663.585365ms] Oct 5 11:11:21.228: INFO: Created: latency-svc-wjrfc Oct 5 11:11:21.279: INFO: Got endpoints: latency-svc-wjrfc [732.361336ms] Oct 5 11:11:21.299: INFO: Created: latency-svc-ddh2t Oct 5 11:11:21.311: INFO: Got endpoints: latency-svc-ddh2t [766.190329ms] Oct 5 11:11:21.327: INFO: Created: latency-svc-bhsjv Oct 5 11:11:21.357: INFO: Got endpoints: latency-svc-bhsjv [810.9001ms] Oct 5 11:11:21.428: INFO: Created: latency-svc-w22rz Oct 5 11:11:21.478: INFO: Got endpoints: latency-svc-w22rz [932.807465ms] Oct 5 11:11:21.502: INFO: Created: latency-svc-c9wcb Oct 5 11:11:21.515: INFO: Got endpoints: latency-svc-c9wcb [883.99984ms] Oct 5 11:11:21.560: INFO: Created: latency-svc-gtntl Oct 5 11:11:21.576: INFO: Got endpoints: latency-svc-gtntl [849.043732ms] Oct 5 11:11:21.614: INFO: Created: latency-svc-m72jm Oct 5 11:11:21.623: INFO: Got endpoints: latency-svc-m72jm [874.258082ms] Oct 5 11:11:21.643: INFO: Created: latency-svc-jbh75 Oct 5 11:11:21.698: INFO: Got endpoints: latency-svc-jbh75 [899.241271ms] Oct 5 11:11:21.698: INFO: Created: latency-svc-9twjn Oct 5 11:11:21.714: INFO: Got endpoints: latency-svc-9twjn [835.316629ms] Oct 5 11:11:21.748: INFO: Created: latency-svc-kw9sv Oct 5 11:11:21.781: INFO: Got endpoints: latency-svc-kw9sv [850.704252ms] Oct 5 11:11:21.846: INFO: Created: latency-svc-2vqg2 Oct 5 11:11:21.881: INFO: Got endpoints: latency-svc-2vqg2 [873.045257ms] Oct 5 11:11:21.911: INFO: Created: latency-svc-hmf8h Oct 5 11:11:21.938: INFO: Got endpoints: latency-svc-hmf8h [886.030035ms] Oct 5 11:11:22.001: INFO: Created: latency-svc-wh6pq Oct 5 11:11:22.015: INFO: Got endpoints: latency-svc-wh6pq [875.94655ms] Oct 5 11:11:22.049: INFO: Created: latency-svc-mt9gp Oct 5 11:11:22.064: INFO: Got endpoints: latency-svc-mt9gp [887.669879ms] Oct 5 11:11:22.153: INFO: Created: 
latency-svc-vjkwk Oct 5 11:11:22.173: INFO: Got endpoints: latency-svc-vjkwk [964.846198ms] Oct 5 11:11:22.188: INFO: Created: latency-svc-n2xnw Oct 5 11:11:22.203: INFO: Got endpoints: latency-svc-n2xnw [924.16402ms] Oct 5 11:11:22.228: INFO: Created: latency-svc-k4cj9 Oct 5 11:11:22.308: INFO: Created: latency-svc-rvsn4 Oct 5 11:11:22.309: INFO: Got endpoints: latency-svc-k4cj9 [997.342036ms] Oct 5 11:11:22.312: INFO: Got endpoints: latency-svc-rvsn4 [955.210914ms] Oct 5 11:11:22.369: INFO: Created: latency-svc-mvwln Oct 5 11:11:22.391: INFO: Got endpoints: latency-svc-mvwln [913.083968ms] Oct 5 11:11:22.405: INFO: Created: latency-svc-8m4xj Oct 5 11:11:22.456: INFO: Got endpoints: latency-svc-8m4xj [940.379585ms] Oct 5 11:11:22.481: INFO: Created: latency-svc-8mbst Oct 5 11:11:22.492: INFO: Got endpoints: latency-svc-8mbst [915.047692ms] Oct 5 11:11:22.510: INFO: Created: latency-svc-wh6td Oct 5 11:11:22.523: INFO: Got endpoints: latency-svc-wh6td [899.470906ms] Oct 5 11:11:22.543: INFO: Created: latency-svc-mtqjh Oct 5 11:11:22.626: INFO: Got endpoints: latency-svc-mtqjh [928.010898ms] Oct 5 11:11:22.628: INFO: Created: latency-svc-hxff6 Oct 5 11:11:22.642: INFO: Got endpoints: latency-svc-hxff6 [928.01033ms] Oct 5 11:11:22.684: INFO: Created: latency-svc-f2tlw Oct 5 11:11:22.697: INFO: Got endpoints: latency-svc-f2tlw [915.370003ms] Oct 5 11:11:22.714: INFO: Created: latency-svc-p8f47 Oct 5 11:11:22.782: INFO: Got endpoints: latency-svc-p8f47 [900.938724ms] Oct 5 11:11:22.783: INFO: Created: latency-svc-w4vvj Oct 5 11:11:22.793: INFO: Got endpoints: latency-svc-w4vvj [854.158622ms] Oct 5 11:11:22.814: INFO: Created: latency-svc-wfvcp Oct 5 11:11:22.824: INFO: Got endpoints: latency-svc-wfvcp [808.389647ms] Oct 5 11:11:22.843: INFO: Created: latency-svc-7m8f9 Oct 5 11:11:22.854: INFO: Got endpoints: latency-svc-7m8f9 [789.826205ms] Oct 5 11:11:22.946: INFO: Created: latency-svc-r9xz7 Oct 5 11:11:22.987: INFO: Created: latency-svc-cs9s8 Oct 5 11:11:22.987: INFO: 
Got endpoints: latency-svc-r9xz7 [814.439436ms] Oct 5 11:11:23.031: INFO: Got endpoints: latency-svc-cs9s8 [826.923532ms] Oct 5 11:11:23.121: INFO: Created: latency-svc-jbfm4 Oct 5 11:11:23.127: INFO: Got endpoints: latency-svc-jbfm4 [818.671403ms] Oct 5 11:11:23.173: INFO: Created: latency-svc-9bfx7 Oct 5 11:11:23.184: INFO: Got endpoints: latency-svc-9bfx7 [871.656198ms] Oct 5 11:11:23.203: INFO: Created: latency-svc-7jtnb Oct 5 11:11:23.216: INFO: Got endpoints: latency-svc-7jtnb [824.806487ms] Oct 5 11:11:23.265: INFO: Created: latency-svc-t5vt4 Oct 5 11:11:23.275: INFO: Got endpoints: latency-svc-t5vt4 [819.150726ms] Oct 5 11:11:23.297: INFO: Created: latency-svc-wnn4s Oct 5 11:11:23.312: INFO: Got endpoints: latency-svc-wnn4s [820.29952ms] Oct 5 11:11:23.332: INFO: Created: latency-svc-l24qg Oct 5 11:11:23.348: INFO: Got endpoints: latency-svc-l24qg [825.030042ms] Oct 5 11:11:23.397: INFO: Created: latency-svc-f8s5x Oct 5 11:11:23.401: INFO: Got endpoints: latency-svc-f8s5x [774.695252ms] Oct 5 11:11:23.425: INFO: Created: latency-svc-g6q68 Oct 5 11:11:23.441: INFO: Got endpoints: latency-svc-g6q68 [797.89977ms] Oct 5 11:11:23.476: INFO: Created: latency-svc-7vt7f Oct 5 11:11:23.487: INFO: Got endpoints: latency-svc-7vt7f [789.491075ms] Oct 5 11:11:23.559: INFO: Created: latency-svc-8lhtc Oct 5 11:11:23.577: INFO: Got endpoints: latency-svc-8lhtc [794.854758ms] Oct 5 11:11:23.596: INFO: Created: latency-svc-4htm6 Oct 5 11:11:23.639: INFO: Got endpoints: latency-svc-4htm6 [845.981473ms] Oct 5 11:11:23.703: INFO: Created: latency-svc-dz4cz Oct 5 11:11:23.708: INFO: Got endpoints: latency-svc-dz4cz [883.958485ms] Oct 5 11:11:23.737: INFO: Created: latency-svc-5bcz7 Oct 5 11:11:23.766: INFO: Got endpoints: latency-svc-5bcz7 [911.42321ms] Oct 5 11:11:23.846: INFO: Created: latency-svc-fkjfn Oct 5 11:11:23.850: INFO: Got endpoints: latency-svc-fkjfn [862.256982ms] Oct 5 11:11:23.888: INFO: Created: latency-svc-fl2s6 Oct 5 11:11:23.915: INFO: Got endpoints: 
latency-svc-fl2s6 [884.592995ms] Oct 5 11:11:23.990: INFO: Created: latency-svc-6z998 Oct 5 11:11:24.025: INFO: Got endpoints: latency-svc-6z998 [897.477959ms] Oct 5 11:11:24.027: INFO: Created: latency-svc-rzgf4 Oct 5 11:11:24.040: INFO: Got endpoints: latency-svc-rzgf4 [855.455844ms] Oct 5 11:11:24.074: INFO: Created: latency-svc-rr642 Oct 5 11:11:24.083: INFO: Got endpoints: latency-svc-rr642 [866.750958ms] Oct 5 11:11:24.146: INFO: Created: latency-svc-hfxts Oct 5 11:11:24.149: INFO: Got endpoints: latency-svc-hfxts [872.992068ms] Oct 5 11:11:24.662: INFO: Created: latency-svc-h88k7 Oct 5 11:11:24.937: INFO: Got endpoints: latency-svc-h88k7 [1.624223982s] Oct 5 11:11:24.960: INFO: Created: latency-svc-vn9s6 Oct 5 11:11:25.410: INFO: Got endpoints: latency-svc-vn9s6 [2.061288528s] Oct 5 11:11:26.069: INFO: Created: latency-svc-c6jdz Oct 5 11:11:26.129: INFO: Got endpoints: latency-svc-c6jdz [2.727894053s] Oct 5 11:11:26.520: INFO: Created: latency-svc-qj645 Oct 5 11:11:26.524: INFO: Got endpoints: latency-svc-qj645 [3.082660712s] Oct 5 11:11:26.583: INFO: Created: latency-svc-z7k5s Oct 5 11:11:26.588: INFO: Got endpoints: latency-svc-z7k5s [3.10119814s] Oct 5 11:11:26.710: INFO: Created: latency-svc-zx48j Oct 5 11:11:26.715: INFO: Got endpoints: latency-svc-zx48j [3.137723098s] Oct 5 11:11:27.384: INFO: Created: latency-svc-kljzz Oct 5 11:11:27.424: INFO: Got endpoints: latency-svc-kljzz [3.784901262s] Oct 5 11:11:27.722: INFO: Created: latency-svc-xz78l Oct 5 11:11:27.734: INFO: Got endpoints: latency-svc-xz78l [4.025723309s] Oct 5 11:11:27.925: INFO: Created: latency-svc-29gzh Oct 5 11:11:27.929: INFO: Got endpoints: latency-svc-29gzh [4.162662402s] Oct 5 11:11:27.979: INFO: Created: latency-svc-cppwp Oct 5 11:11:28.023: INFO: Got endpoints: latency-svc-cppwp [4.172827752s] Oct 5 11:11:28.437: INFO: Created: latency-svc-slk9h Oct 5 11:11:28.547: INFO: Got endpoints: latency-svc-slk9h [4.631780025s] Oct 5 11:11:28.597: INFO: Created: latency-svc-g4tm7 Oct 5 
11:11:28.627: INFO: Got endpoints: latency-svc-g4tm7 [4.601649267s] Oct 5 11:11:28.811: INFO: Created: latency-svc-5llpn Oct 5 11:11:28.901: INFO: Got endpoints: latency-svc-5llpn [4.860755093s] Oct 5 11:11:28.987: INFO: Created: latency-svc-zb996 Oct 5 11:11:29.116: INFO: Got endpoints: latency-svc-zb996 [5.031883384s] Oct 5 11:11:29.253: INFO: Created: latency-svc-2m54z Oct 5 11:11:29.255: INFO: Got endpoints: latency-svc-2m54z [5.106533625s] Oct 5 11:11:29.408: INFO: Created: latency-svc-wksjh Oct 5 11:11:29.418: INFO: Got endpoints: latency-svc-wksjh [4.480919349s] Oct 5 11:11:29.441: INFO: Created: latency-svc-fw67m Oct 5 11:11:29.455: INFO: Got endpoints: latency-svc-fw67m [4.04542491s] Oct 5 11:11:29.473: INFO: Created: latency-svc-wp7nk Oct 5 11:11:29.491: INFO: Got endpoints: latency-svc-wp7nk [3.360862723s] Oct 5 11:11:29.582: INFO: Created: latency-svc-zl6w6 Oct 5 11:11:29.583: INFO: Got endpoints: latency-svc-zl6w6 [3.059476899s] Oct 5 11:11:29.646: INFO: Created: latency-svc-d57wf Oct 5 11:11:29.660: INFO: Got endpoints: latency-svc-d57wf [3.071492385s] Oct 5 11:11:29.733: INFO: Created: latency-svc-c9x9p Oct 5 11:11:29.744: INFO: Got endpoints: latency-svc-c9x9p [3.028057787s] Oct 5 11:11:29.780: INFO: Created: latency-svc-jtxrh Oct 5 11:11:29.792: INFO: Got endpoints: latency-svc-jtxrh [2.367414296s] Oct 5 11:11:29.887: INFO: Created: latency-svc-rnhkt Oct 5 11:11:29.941: INFO: Created: latency-svc-tvnq7 Oct 5 11:11:29.942: INFO: Got endpoints: latency-svc-rnhkt [2.208295291s] Oct 5 11:11:29.967: INFO: Got endpoints: latency-svc-tvnq7 [2.03760512s] Oct 5 11:11:30.031: INFO: Created: latency-svc-94c85 Oct 5 11:11:30.084: INFO: Got endpoints: latency-svc-94c85 [2.061269239s] Oct 5 11:11:30.086: INFO: Created: latency-svc-ldcsf Oct 5 11:11:30.099: INFO: Got endpoints: latency-svc-ldcsf [1.551094436s] Oct 5 11:11:30.170: INFO: Created: latency-svc-9fmcq Oct 5 11:11:30.177: INFO: Got endpoints: latency-svc-9fmcq [1.549616583s] Oct 5 11:11:30.213: INFO: 
Created: latency-svc-xl64j Oct 5 11:11:30.240: INFO: Got endpoints: latency-svc-xl64j [1.338369534s] Oct 5 11:11:30.342: INFO: Created: latency-svc-qttl9 Oct 5 11:11:30.344: INFO: Got endpoints: latency-svc-qttl9 [1.228520705s] Oct 5 11:11:30.443: INFO: Created: latency-svc-vqhjv Oct 5 11:11:30.545: INFO: Got endpoints: latency-svc-vqhjv [1.289810602s] Oct 5 11:11:30.550: INFO: Created: latency-svc-ngd8v Oct 5 11:11:30.556: INFO: Got endpoints: latency-svc-ngd8v [1.137827853s] Oct 5 11:11:30.639: INFO: Created: latency-svc-wpsv7 Oct 5 11:11:30.641: INFO: Got endpoints: latency-svc-wpsv7 [1.185153587s] Oct 5 11:11:30.678: INFO: Created: latency-svc-wvhhp Oct 5 11:11:30.694: INFO: Got endpoints: latency-svc-wvhhp [1.202787905s] Oct 5 11:11:30.727: INFO: Created: latency-svc-m76jl Oct 5 11:11:30.743: INFO: Got endpoints: latency-svc-m76jl [1.159324828s] Oct 5 11:11:30.815: INFO: Created: latency-svc-qzf9q Oct 5 11:11:30.828: INFO: Got endpoints: latency-svc-qzf9q [1.168355183s] Oct 5 11:11:30.855: INFO: Created: latency-svc-96rg7 Oct 5 11:11:30.869: INFO: Got endpoints: latency-svc-96rg7 [1.125436333s] Oct 5 11:11:30.886: INFO: Created: latency-svc-rp2w9 Oct 5 11:11:30.910: INFO: Got endpoints: latency-svc-rp2w9 [1.117904061s] Oct 5 11:11:30.964: INFO: Created: latency-svc-7gpk4 Oct 5 11:11:30.968: INFO: Got endpoints: latency-svc-7gpk4 [1.025235243s] Oct 5 11:11:30.996: INFO: Created: latency-svc-4dwcq Oct 5 11:11:31.008: INFO: Got endpoints: latency-svc-4dwcq [1.040418097s] Oct 5 11:11:31.026: INFO: Created: latency-svc-wppsm Oct 5 11:11:31.051: INFO: Got endpoints: latency-svc-wppsm [965.937857ms] Oct 5 11:11:31.102: INFO: Created: latency-svc-df8xs Oct 5 11:11:31.126: INFO: Got endpoints: latency-svc-df8xs [1.026968269s] Oct 5 11:11:31.164: INFO: Created: latency-svc-lvxnt Oct 5 11:11:31.189: INFO: Got endpoints: latency-svc-lvxnt [1.01230096s] Oct 5 11:11:31.249: INFO: Created: latency-svc-tbc66 Oct 5 11:11:31.262: INFO: Got endpoints: latency-svc-tbc66 
[1.021622298s] Oct 5 11:11:31.275: INFO: Created: latency-svc-f5vr7 Oct 5 11:11:31.292: INFO: Got endpoints: latency-svc-f5vr7 [947.635259ms] Oct 5 11:11:31.366: INFO: Created: latency-svc-k8z8g Oct 5 11:11:31.405: INFO: Got endpoints: latency-svc-k8z8g [859.664264ms] Oct 5 11:11:31.407: INFO: Created: latency-svc-wbwj8 Oct 5 11:11:31.424: INFO: Got endpoints: latency-svc-wbwj8 [867.815116ms] Oct 5 11:11:31.441: INFO: Created: latency-svc-nhv4p Oct 5 11:11:31.458: INFO: Got endpoints: latency-svc-nhv4p [816.848644ms] Oct 5 11:11:31.497: INFO: Created: latency-svc-nd7nl Oct 5 11:11:31.501: INFO: Got endpoints: latency-svc-nd7nl [807.2017ms] Oct 5 11:11:31.559: INFO: Created: latency-svc-p8rhf Oct 5 11:11:31.568: INFO: Got endpoints: latency-svc-p8rhf [824.740472ms] Oct 5 11:11:31.588: INFO: Created: latency-svc-2k9lb Oct 5 11:11:31.636: INFO: Got endpoints: latency-svc-2k9lb [807.052362ms] Oct 5 11:11:31.669: INFO: Created: latency-svc-kj942 Oct 5 11:11:31.689: INFO: Got endpoints: latency-svc-kj942 [819.199131ms] Oct 5 11:11:31.721: INFO: Created: latency-svc-nlvj5 Oct 5 11:11:31.731: INFO: Got endpoints: latency-svc-nlvj5 [821.127526ms] Oct 5 11:11:31.794: INFO: Created: latency-svc-dpx2d Oct 5 11:11:31.810: INFO: Got endpoints: latency-svc-dpx2d [841.973182ms] Oct 5 11:11:31.837: INFO: Created: latency-svc-pl7qh Oct 5 11:11:31.893: INFO: Got endpoints: latency-svc-pl7qh [885.096976ms] Oct 5 11:11:31.906: INFO: Created: latency-svc-qj5s7 Oct 5 11:11:31.919: INFO: Got endpoints: latency-svc-qj5s7 [868.124832ms] Oct 5 11:11:31.942: INFO: Created: latency-svc-w5f46 Oct 5 11:11:31.955: INFO: Got endpoints: latency-svc-w5f46 [828.307057ms] Oct 5 11:11:31.974: INFO: Created: latency-svc-rrv5n Oct 5 11:11:31.992: INFO: Got endpoints: latency-svc-rrv5n [802.667166ms] Oct 5 11:11:32.036: INFO: Created: latency-svc-77gr7 Oct 5 11:11:32.046: INFO: Got endpoints: latency-svc-77gr7 [784.095293ms] Oct 5 11:11:32.061: INFO: Created: latency-svc-qcmhb Oct 5 11:11:32.078: INFO: 
Got endpoints: latency-svc-qcmhb [785.044659ms] Oct 5 11:11:32.111: INFO: Created: latency-svc-pnkjp Oct 5 11:11:32.124: INFO: Got endpoints: latency-svc-pnkjp [718.613681ms] Oct 5 11:11:32.205: INFO: Created: latency-svc-dsr8m Oct 5 11:11:32.228: INFO: Got endpoints: latency-svc-dsr8m [804.256706ms] Oct 5 11:11:32.232: INFO: Created: latency-svc-52pxt Oct 5 11:11:32.248: INFO: Got endpoints: latency-svc-52pxt [789.961821ms] Oct 5 11:11:32.286: INFO: Created: latency-svc-c7xkw Oct 5 11:11:32.371: INFO: Got endpoints: latency-svc-c7xkw [870.471719ms] Oct 5 11:11:32.373: INFO: Created: latency-svc-spn7f Oct 5 11:11:32.382: INFO: Got endpoints: latency-svc-spn7f [814.492194ms] Oct 5 11:11:32.410: INFO: Created: latency-svc-tz82k Oct 5 11:11:32.440: INFO: Got endpoints: latency-svc-tz82k [804.471504ms] Oct 5 11:11:32.523: INFO: Created: latency-svc-cjcsb Oct 5 11:11:32.546: INFO: Created: latency-svc-t67qf Oct 5 11:11:32.546: INFO: Got endpoints: latency-svc-cjcsb [857.300958ms] Oct 5 11:11:32.570: INFO: Got endpoints: latency-svc-t67qf [838.423545ms] Oct 5 11:11:32.590: INFO: Created: latency-svc-w766j Oct 5 11:11:32.601: INFO: Got endpoints: latency-svc-w766j [791.02705ms] Oct 5 11:11:32.620: INFO: Created: latency-svc-c8bjk Oct 5 11:11:32.660: INFO: Got endpoints: latency-svc-c8bjk [766.61499ms] Oct 5 11:11:32.665: INFO: Created: latency-svc-4nx8l Oct 5 11:11:32.685: INFO: Got endpoints: latency-svc-4nx8l [766.068283ms] Oct 5 11:11:32.704: INFO: Created: latency-svc-wsmrs Oct 5 11:11:32.715: INFO: Got endpoints: latency-svc-wsmrs [760.615146ms] Oct 5 11:11:32.736: INFO: Created: latency-svc-8ttzn Oct 5 11:11:32.745: INFO: Got endpoints: latency-svc-8ttzn [752.705375ms] Oct 5 11:11:32.791: INFO: Created: latency-svc-4twss Oct 5 11:11:32.817: INFO: Got endpoints: latency-svc-4twss [771.22016ms] Oct 5 11:11:32.820: INFO: Created: latency-svc-2kk25 Oct 5 11:11:32.839: INFO: Got endpoints: latency-svc-2kk25 [761.236599ms] Oct 5 11:11:32.869: INFO: Created: 
latency-svc-zkcd5 Oct 5 11:11:32.885: INFO: Got endpoints: latency-svc-zkcd5 [760.384095ms] Oct 5 11:11:32.922: INFO: Created: latency-svc-jtzx8 Oct 5 11:11:32.945: INFO: Got endpoints: latency-svc-jtzx8 [716.029109ms] Oct 5 11:11:32.978: INFO: Created: latency-svc-xwv9v Oct 5 11:11:32.993: INFO: Got endpoints: latency-svc-xwv9v [744.908949ms] Oct 5 11:11:33.016: INFO: Created: latency-svc-xh49h Oct 5 11:11:33.067: INFO: Got endpoints: latency-svc-xh49h [695.258141ms] Oct 5 11:11:33.109: INFO: Created: latency-svc-f4wm4 Oct 5 11:11:33.132: INFO: Got endpoints: latency-svc-f4wm4 [749.252009ms] Oct 5 11:11:33.166: INFO: Created: latency-svc-xgsks Oct 5 11:11:33.216: INFO: Got endpoints: latency-svc-xgsks [775.625421ms] Oct 5 11:11:33.241: INFO: Created: latency-svc-bmqq6 Oct 5 11:11:33.265: INFO: Got endpoints: latency-svc-bmqq6 [718.802775ms] Oct 5 11:11:33.315: INFO: Created: latency-svc-jvs86 Oct 5 11:11:33.362: INFO: Got endpoints: latency-svc-jvs86 [791.522737ms] Oct 5 11:11:33.365: INFO: Created: latency-svc-kfsbt Oct 5 11:11:33.373: INFO: Got endpoints: latency-svc-kfsbt [771.386227ms] Oct 5 11:11:33.403: INFO: Created: latency-svc-fjjlj Oct 5 11:11:33.422: INFO: Got endpoints: latency-svc-fjjlj [762.163429ms] Oct 5 11:11:33.456: INFO: Created: latency-svc-tpj44 Oct 5 11:11:33.490: INFO: Got endpoints: latency-svc-tpj44 [804.827322ms] Oct 5 11:11:33.521: INFO: Created: latency-svc-gx7b7 Oct 5 11:11:33.536: INFO: Got endpoints: latency-svc-gx7b7 [820.572172ms] Oct 5 11:11:33.554: INFO: Created: latency-svc-wxjsg Oct 5 11:11:33.578: INFO: Got endpoints: latency-svc-wxjsg [832.572009ms] Oct 5 11:11:33.635: INFO: Created: latency-svc-nhjmr Oct 5 11:11:33.665: INFO: Got endpoints: latency-svc-nhjmr [847.912393ms] Oct 5 11:11:33.666: INFO: Created: latency-svc-47zlj Oct 5 11:11:33.701: INFO: Got endpoints: latency-svc-47zlj [861.530499ms] Oct 5 11:11:33.767: INFO: Created: latency-svc-xtw5p Oct 5 11:11:33.771: INFO: Got endpoints: latency-svc-xtw5p [886.439092ms] 
Oct 5 11:11:33.833: INFO: Created: latency-svc-75m5z Oct 5 11:11:33.906: INFO: Got endpoints: latency-svc-75m5z [960.64556ms] Oct 5 11:11:33.919: INFO: Created: latency-svc-pgg9l Oct 5 11:11:33.933: INFO: Got endpoints: latency-svc-pgg9l [939.833402ms] Oct 5 11:11:33.956: INFO: Created: latency-svc-bf2dm Oct 5 11:11:33.982: INFO: Got endpoints: latency-svc-bf2dm [914.722587ms] Oct 5 11:11:34.043: INFO: Created: latency-svc-7b6t8 Oct 5 11:11:34.054: INFO: Got endpoints: latency-svc-7b6t8 [922.082315ms] Oct 5 11:11:34.085: INFO: Created: latency-svc-gwvfs Oct 5 11:11:34.096: INFO: Got endpoints: latency-svc-gwvfs [879.594072ms] Oct 5 11:11:34.115: INFO: Created: latency-svc-7xcwx Oct 5 11:11:34.127: INFO: Got endpoints: latency-svc-7xcwx [862.082468ms] Oct 5 11:11:34.181: INFO: Created: latency-svc-zq24g Oct 5 11:11:34.208: INFO: Got endpoints: latency-svc-zq24g [845.825007ms] Oct 5 11:11:34.232: INFO: Created: latency-svc-7zc4r Oct 5 11:11:34.241: INFO: Got endpoints: latency-svc-7zc4r [868.395202ms] Oct 5 11:11:34.259: INFO: Created: latency-svc-f2bdr Oct 5 11:11:34.270: INFO: Got endpoints: latency-svc-f2bdr [848.223559ms] Oct 5 11:11:34.313: INFO: Created: latency-svc-mqfn9 Oct 5 11:11:34.320: INFO: Got endpoints: latency-svc-mqfn9 [829.291322ms] Oct 5 11:11:34.346: INFO: Created: latency-svc-d8hqn Oct 5 11:11:34.361: INFO: Got endpoints: latency-svc-d8hqn [825.173115ms] Oct 5 11:11:34.397: INFO: Created: latency-svc-8m9c2 Oct 5 11:11:34.475: INFO: Got endpoints: latency-svc-8m9c2 [897.060381ms] Oct 5 11:11:34.476: INFO: Created: latency-svc-nrp7l Oct 5 11:11:34.481: INFO: Got endpoints: latency-svc-nrp7l [815.818599ms] Oct 5 11:11:34.516: INFO: Created: latency-svc-q5gvl Oct 5 11:11:34.536: INFO: Got endpoints: latency-svc-q5gvl [835.084434ms] Oct 5 11:11:34.559: INFO: Created: latency-svc-bt99n Oct 5 11:11:34.573: INFO: Got endpoints: latency-svc-bt99n [801.889874ms] Oct 5 11:11:34.618: INFO: Created: latency-svc-gk79h Oct 5 11:11:34.634: INFO: Got endpoints: 
latency-svc-gk79h [728.682398ms] Oct 5 11:11:34.664: INFO: Created: latency-svc-54ntw Oct 5 11:11:34.678: INFO: Got endpoints: latency-svc-54ntw [744.654664ms] Oct 5 11:11:34.690: INFO: Created: latency-svc-j7kcd Oct 5 11:11:34.715: INFO: Got endpoints: latency-svc-j7kcd [732.564233ms] Oct 5 11:11:34.768: INFO: Created: latency-svc-hvk96 Oct 5 11:11:34.772: INFO: Got endpoints: latency-svc-hvk96 [717.31933ms] Oct 5 11:11:34.796: INFO: Created: latency-svc-dkg5t Oct 5 11:11:34.819: INFO: Got endpoints: latency-svc-dkg5t [722.354009ms] Oct 5 11:11:34.842: INFO: Created: latency-svc-f2rdb Oct 5 11:11:34.917: INFO: Got endpoints: latency-svc-f2rdb [789.325434ms] Oct 5 11:11:34.922: INFO: Created: latency-svc-fhw6p Oct 5 11:11:34.935: INFO: Got endpoints: latency-svc-fhw6p [727.04739ms] Oct 5 11:11:34.953: INFO: Created: latency-svc-ztg7r Oct 5 11:11:34.966: INFO: Got endpoints: latency-svc-ztg7r [724.422919ms] Oct 5 11:11:34.993: INFO: Created: latency-svc-hmk4b Oct 5 11:11:35.011: INFO: Got endpoints: latency-svc-hmk4b [740.657649ms] Oct 5 11:11:35.061: INFO: Created: latency-svc-vz7f8 Oct 5 11:11:35.066: INFO: Got endpoints: latency-svc-vz7f8 [746.184663ms] Oct 5 11:11:35.091: INFO: Created: latency-svc-645pz Oct 5 11:11:35.140: INFO: Got endpoints: latency-svc-645pz [778.792958ms] Oct 5 11:11:35.193: INFO: Created: latency-svc-6xfxf Oct 5 11:11:35.217: INFO: Got endpoints: latency-svc-6xfxf [740.876268ms] Oct 5 11:11:35.245: INFO: Created: latency-svc-dw2r7 Oct 5 11:11:35.254: INFO: Got endpoints: latency-svc-dw2r7 [772.58608ms] Oct 5 11:11:35.729: INFO: Created: latency-svc-7r98l Oct 5 11:11:36.002: INFO: Got endpoints: latency-svc-7r98l [1.465519505s] Oct 5 11:11:36.062: INFO: Created: latency-svc-nl4pk Oct 5 11:11:36.075: INFO: Got endpoints: latency-svc-nl4pk [1.50182598s] Oct 5 11:11:36.219: INFO: Created: latency-svc-mxtd7 Oct 5 11:11:36.330: INFO: Got endpoints: latency-svc-mxtd7 [1.69529674s] Oct 5 11:11:36.422: INFO: Created: latency-svc-gt6tw Oct 5 
11:11:36.600: INFO: Got endpoints: latency-svc-gt6tw [1.921951853s] Oct 5 11:11:36.643: INFO: Created: latency-svc-224gg Oct 5 11:11:36.694: INFO: Got endpoints: latency-svc-224gg [1.979445047s] Oct 5 11:11:36.809: INFO: Created: latency-svc-8qvsn Oct 5 11:11:36.812: INFO: Got endpoints: latency-svc-8qvsn [2.040403354s] Oct 5 11:11:36.829: INFO: Created: latency-svc-z49kk Oct 5 11:11:37.710: INFO: Got endpoints: latency-svc-z49kk [2.891176452s] Oct 5 11:11:37.881: INFO: Created: latency-svc-pnr9s Oct 5 11:11:37.883: INFO: Got endpoints: latency-svc-pnr9s [2.966110279s] Oct 5 11:11:37.956: INFO: Created: latency-svc-f7xlf Oct 5 11:11:38.051: INFO: Got endpoints: latency-svc-f7xlf [3.115640803s] Oct 5 11:11:38.083: INFO: Created: latency-svc-vpkdq Oct 5 11:11:38.089: INFO: Got endpoints: latency-svc-vpkdq [3.122894132s] Oct 5 11:11:38.126: INFO: Created: latency-svc-lbvd2 Oct 5 11:11:38.132: INFO: Got endpoints: latency-svc-lbvd2 [3.12018318s] Oct 5 11:11:38.200: INFO: Created: latency-svc-5r75h Oct 5 11:11:38.201: INFO: Got endpoints: latency-svc-5r75h [3.135244538s] Oct 5 11:11:38.287: INFO: Created: latency-svc-226f6 Oct 5 11:11:38.343: INFO: Got endpoints: latency-svc-226f6 [3.202736683s] Oct 5 11:11:38.399: INFO: Created: latency-svc-mqx48 Oct 5 11:11:38.422: INFO: Got endpoints: latency-svc-mqx48 [3.205234278s] Oct 5 11:11:38.532: INFO: Created: latency-svc-t7fn6 Oct 5 11:11:38.553: INFO: Got endpoints: latency-svc-t7fn6 [3.298494308s] Oct 5 11:11:38.655: INFO: Created: latency-svc-4c76x Oct 5 11:11:38.661: INFO: Got endpoints: latency-svc-4c76x [2.65908751s] Oct 5 11:11:38.678: INFO: Created: latency-svc-tgpl8 Oct 5 11:11:38.708: INFO: Got endpoints: latency-svc-tgpl8 [2.632133373s] Oct 5 11:11:38.741: INFO: Created: latency-svc-x9js4 Oct 5 11:11:38.791: INFO: Got endpoints: latency-svc-x9js4 [2.460443401s] Oct 5 11:11:38.809: INFO: Created: latency-svc-tvmdt Oct 5 11:11:38.824: INFO: Got endpoints: latency-svc-tvmdt [2.224056963s] Oct 5 11:11:38.846: INFO: 
Created: latency-svc-7mww4 Oct 5 11:11:38.860: INFO: Got endpoints: latency-svc-7mww4 [2.165179649s] Oct 5 11:11:38.884: INFO: Created: latency-svc-ndnrw Oct 5 11:11:38.929: INFO: Got endpoints: latency-svc-ndnrw [2.116833464s] Oct 5 11:11:38.931: INFO: Latencies: [87.050793ms 182.728478ms 204.621423ms 253.27468ms 333.274697ms 384.913814ms 462.350414ms 508.13115ms 594.323826ms 630.68009ms 663.585365ms 695.258141ms 716.029109ms 717.31933ms 718.613681ms 718.802775ms 722.354009ms 724.422919ms 727.04739ms 728.682398ms 732.361336ms 732.564233ms 740.657649ms 740.876268ms 744.654664ms 744.908949ms 746.184663ms 749.252009ms 752.705375ms 760.384095ms 760.615146ms 761.236599ms 762.163429ms 766.068283ms 766.190329ms 766.61499ms 771.22016ms 771.386227ms 772.58608ms 774.695252ms 775.625421ms 778.792958ms 784.095293ms 785.044659ms 789.325434ms 789.491075ms 789.826205ms 789.961821ms 791.02705ms 791.522737ms 794.854758ms 797.89977ms 801.889874ms 802.667166ms 804.256706ms 804.471504ms 804.827322ms 807.052362ms 807.2017ms 808.389647ms 810.9001ms 814.439436ms 814.492194ms 815.818599ms 816.848644ms 818.671403ms 819.150726ms 819.199131ms 820.29952ms 820.572172ms 821.127526ms 824.740472ms 824.806487ms 825.030042ms 825.173115ms 826.923532ms 828.307057ms 829.291322ms 832.572009ms 835.084434ms 835.316629ms 838.423545ms 841.973182ms 845.825007ms 845.981473ms 847.912393ms 848.223559ms 849.043732ms 850.704252ms 854.158622ms 855.455844ms 857.300958ms 859.664264ms 861.530499ms 862.082468ms 862.256982ms 866.750958ms 867.815116ms 868.124832ms 868.395202ms 870.471719ms 871.656198ms 872.992068ms 873.045257ms 874.258082ms 875.94655ms 879.594072ms 883.958485ms 883.99984ms 884.592995ms 885.096976ms 886.030035ms 886.439092ms 887.669879ms 897.060381ms 897.477959ms 899.241271ms 899.470906ms 900.938724ms 911.42321ms 913.083968ms 914.722587ms 915.047692ms 915.370003ms 922.082315ms 924.16402ms 928.01033ms 928.010898ms 932.807465ms 939.833402ms 940.379585ms 947.635259ms 955.210914ms 960.64556ms 964.846198ms 
965.937857ms 997.342036ms 1.01230096s 1.021622298s 1.025235243s 1.026968269s 1.040418097s 1.117904061s 1.125436333s 1.137827853s 1.159324828s 1.168355183s 1.185153587s 1.202787905s 1.228520705s 1.289810602s 1.338369534s 1.465519505s 1.50182598s 1.549616583s 1.551094436s 1.624223982s 1.69529674s 1.921951853s 1.979445047s 2.03760512s 2.040403354s 2.061269239s 2.061288528s 2.116833464s 2.165179649s 2.208295291s 2.224056963s 2.367414296s 2.460443401s 2.632133373s 2.65908751s 2.727894053s 2.891176452s 2.966110279s 3.028057787s 3.059476899s 3.071492385s 3.082660712s 3.10119814s 3.115640803s 3.12018318s 3.122894132s 3.135244538s 3.137723098s 3.202736683s 3.205234278s 3.298494308s 3.360862723s 3.784901262s 4.025723309s 4.04542491s 4.162662402s 4.172827752s 4.480919349s 4.601649267s 4.631780025s 4.860755093s 5.031883384s 5.106533625s] Oct 5 11:11:38.934: INFO: 50 %ile: 870.471719ms Oct 5 11:11:38.934: INFO: 90 %ile: 3.115640803s Oct 5 11:11:38.934: INFO: 99 %ile: 5.031883384s Oct 5 11:11:38.935: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:11:38.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2873" for this suite. 
• [SLOW TEST:25.770 seconds] [sig-network] Service endpoints latency /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":303,"completed":227,"skipped":3813,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:11:38.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating server pod server in namespace prestop-9349 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-9349 STEP: Deleting pre-stop pod Oct 5 11:11:54.808: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:11:54.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-9349" for this suite. • [SLOW TEST:16.654 seconds] [k8s.io] [sig-node] PreStop /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should call prestop when killing a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":303,"completed":228,"skipped":3820,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:11:55.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1005 11:12:37.888699 10 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 11:13:39.913: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Oct 5 11:13:39.913: INFO: Deleting pod "simpletest.rc-2s6jx" in namespace "gc-5962" Oct 5 11:13:39.933: INFO: Deleting pod "simpletest.rc-d9s5r" in namespace "gc-5962" Oct 5 11:13:39.987: INFO: Deleting pod "simpletest.rc-fqmkd" in namespace "gc-5962" Oct 5 11:13:40.058: INFO: Deleting pod "simpletest.rc-l29rj" in namespace "gc-5962" Oct 5 11:13:40.118: INFO: Deleting pod "simpletest.rc-mlhw6" in namespace "gc-5962" Oct 5 11:13:40.716: INFO: Deleting pod "simpletest.rc-sgdj6" in namespace "gc-5962" Oct 5 11:13:40.803: INFO: Deleting pod "simpletest.rc-tc8f7" in namespace "gc-5962" Oct 5 11:13:41.213: INFO: Deleting pod "simpletest.rc-v5lj4" in namespace "gc-5962" Oct 5 11:13:41.338: INFO: Deleting pod "simpletest.rc-x4z45" in namespace "gc-5962" Oct 5 11:13:41.598: INFO: Deleting pod "simpletest.rc-zfxdf" in namespace "gc-5962" [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:13:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5962" for this suite. 
• [SLOW TEST:106.643 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":303,"completed":229,"skipped":3825,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:13:42.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name projected-secret-test-6458d9f4-caad-4610-bd30-5075356b87d9 STEP: Creating a pod to test consume secrets Oct 5 11:13:43.436: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29" in namespace 
"projected-9102" to be "Succeeded or Failed" Oct 5 11:13:43.659: INFO: Pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29": Phase="Pending", Reason="", readiness=false. Elapsed: 223.110234ms Oct 5 11:13:45.671: INFO: Pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235043958s Oct 5 11:13:47.708: INFO: Pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271972857s Oct 5 11:13:49.716: INFO: Pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.279762121s STEP: Saw pod success Oct 5 11:13:49.716: INFO: Pod "pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29" satisfied condition "Succeeded or Failed" Oct 5 11:13:49.722: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29 container projected-secret-volume-test: STEP: delete the pod Oct 5 11:13:49.819: INFO: Waiting for pod pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29 to disappear Oct 5 11:13:49.825: INFO: Pod pod-projected-secrets-0d127690-6101-4002-828b-f62e248a2b29 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:13:49.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9102" for this suite. 
• [SLOW TEST:7.580 seconds] [sig-storage] Projected secret /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":230,"skipped":3834,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:13:49.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service nodeport-test with type=NodePort in namespace services-9170 STEP: creating replication controller 
nodeport-test in namespace services-9170 I1005 11:13:50.052669 10 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-9170, replica count: 2 I1005 11:13:53.104055 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:13:56.105435 10 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 11:13:56.105: INFO: Creating new exec pod Oct 5 11:14:01.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9170 execpod8v4sb -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Oct 5 11:14:05.692: INFO: stderr: "I1005 11:14:05.569463 3618 log.go:181] (0x2af6000) (0x2af6070) Create stream\nI1005 11:14:05.572132 3618 log.go:181] (0x2af6000) (0x2af6070) Stream added, broadcasting: 1\nI1005 11:14:05.585463 3618 log.go:181] (0x2af6000) Reply frame received for 1\nI1005 11:14:05.585916 3618 log.go:181] (0x2af6000) (0x28cc070) Create stream\nI1005 11:14:05.586002 3618 log.go:181] (0x2af6000) (0x28cc070) Stream added, broadcasting: 3\nI1005 11:14:05.587396 3618 log.go:181] (0x2af6000) Reply frame received for 3\nI1005 11:14:05.587767 3618 log.go:181] (0x2af6000) (0x28cc230) Create stream\nI1005 11:14:05.587859 3618 log.go:181] (0x2af6000) (0x28cc230) Stream added, broadcasting: 5\nI1005 11:14:05.589215 3618 log.go:181] (0x2af6000) Reply frame received for 5\nI1005 11:14:05.673721 3618 log.go:181] (0x2af6000) Data frame received for 5\nI1005 11:14:05.673931 3618 log.go:181] (0x28cc230) (5) Data frame handling\nI1005 11:14:05.674353 3618 log.go:181] (0x28cc230) (5) Data frame sent\nI1005 11:14:05.674481 3618 log.go:181] (0x2af6000) Data frame received for 3\nI1005 11:14:05.674632 3618 log.go:181] (0x28cc070) (3) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 
80\nI1005 11:14:05.674848 3618 log.go:181] (0x2af6000) Data frame received for 5\nI1005 11:14:05.674938 3618 log.go:181] (0x28cc230) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI1005 11:14:05.676993 3618 log.go:181] (0x28cc230) (5) Data frame sent\nI1005 11:14:05.677304 3618 log.go:181] (0x2af6000) Data frame received for 5\nI1005 11:14:05.677478 3618 log.go:181] (0x28cc230) (5) Data frame handling\nI1005 11:14:05.677639 3618 log.go:181] (0x2af6000) Data frame received for 1\nI1005 11:14:05.677721 3618 log.go:181] (0x2af6070) (1) Data frame handling\nI1005 11:14:05.677811 3618 log.go:181] (0x2af6070) (1) Data frame sent\nI1005 11:14:05.678982 3618 log.go:181] (0x2af6000) (0x2af6070) Stream removed, broadcasting: 1\nI1005 11:14:05.680659 3618 log.go:181] (0x2af6000) Go away received\nI1005 11:14:05.683532 3618 log.go:181] (0x2af6000) (0x2af6070) Stream removed, broadcasting: 1\nI1005 11:14:05.683807 3618 log.go:181] (0x2af6000) (0x28cc070) Stream removed, broadcasting: 3\nI1005 11:14:05.684019 3618 log.go:181] (0x2af6000) (0x28cc230) Stream removed, broadcasting: 5\n" Oct 5 11:14:05.693: INFO: stdout: "" Oct 5 11:14:05.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9170 execpod8v4sb -- /bin/sh -x -c nc -zv -t -w 2 10.97.176.116 80' Oct 5 11:14:07.223: INFO: stderr: "I1005 11:14:07.100176 3639 log.go:181] (0x29f20e0) (0x29f2150) Create stream\nI1005 11:14:07.102212 3639 log.go:181] (0x29f20e0) (0x29f2150) Stream added, broadcasting: 1\nI1005 11:14:07.117762 3639 log.go:181] (0x29f20e0) Reply frame received for 1\nI1005 11:14:07.118212 3639 log.go:181] (0x29f20e0) (0x24f2380) Create stream\nI1005 11:14:07.118281 3639 log.go:181] (0x29f20e0) (0x24f2380) Stream added, broadcasting: 3\nI1005 11:14:07.119478 3639 log.go:181] (0x29f20e0) Reply frame received for 3\nI1005 11:14:07.119738 3639 log.go:181] (0x29f20e0) (0x2d60070) Create 
stream\nI1005 11:14:07.119812 3639 log.go:181] (0x29f20e0) (0x2d60070) Stream added, broadcasting: 5\nI1005 11:14:07.121015 3639 log.go:181] (0x29f20e0) Reply frame received for 5\nI1005 11:14:07.205744 3639 log.go:181] (0x29f20e0) Data frame received for 3\nI1005 11:14:07.206009 3639 log.go:181] (0x24f2380) (3) Data frame handling\nI1005 11:14:07.206241 3639 log.go:181] (0x29f20e0) Data frame received for 5\nI1005 11:14:07.206520 3639 log.go:181] (0x2d60070) (5) Data frame handling\nI1005 11:14:07.206677 3639 log.go:181] (0x29f20e0) Data frame received for 1\nI1005 11:14:07.206863 3639 log.go:181] (0x29f2150) (1) Data frame handling\n+ nc -zv -t -w 2 10.97.176.116 80\nConnection to 10.97.176.116 80 port [tcp/http] succeeded!\nI1005 11:14:07.208666 3639 log.go:181] (0x2d60070) (5) Data frame sent\nI1005 11:14:07.210422 3639 log.go:181] (0x29f20e0) Data frame received for 5\nI1005 11:14:07.210630 3639 log.go:181] (0x2d60070) (5) Data frame handling\nI1005 11:14:07.211460 3639 log.go:181] (0x29f2150) (1) Data frame sent\nI1005 11:14:07.212334 3639 log.go:181] (0x29f20e0) (0x29f2150) Stream removed, broadcasting: 1\nI1005 11:14:07.212742 3639 log.go:181] (0x29f20e0) Go away received\nI1005 11:14:07.215184 3639 log.go:181] (0x29f20e0) (0x29f2150) Stream removed, broadcasting: 1\nI1005 11:14:07.215398 3639 log.go:181] (0x29f20e0) (0x24f2380) Stream removed, broadcasting: 3\nI1005 11:14:07.215638 3639 log.go:181] (0x29f20e0) (0x2d60070) Stream removed, broadcasting: 5\n" Oct 5 11:14:07.225: INFO: stdout: "" Oct 5 11:14:07.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9170 execpod8v4sb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32591' Oct 5 11:14:08.681: INFO: stderr: "I1005 11:14:08.588618 3659 log.go:181] (0x26278f0) (0x26279d0) Create stream\nI1005 11:14:08.593208 3659 log.go:181] (0x26278f0) (0x26279d0) Stream added, broadcasting: 1\nI1005 11:14:08.605357 3659 log.go:181] 
(0x26278f0) Reply frame received for 1\nI1005 11:14:08.606039 3659 log.go:181] (0x26278f0) (0x2daa070) Create stream\nI1005 11:14:08.606119 3659 log.go:181] (0x26278f0) (0x2daa070) Stream added, broadcasting: 3\nI1005 11:14:08.607692 3659 log.go:181] (0x26278f0) Reply frame received for 3\nI1005 11:14:08.607935 3659 log.go:181] (0x26278f0) (0x247c310) Create stream\nI1005 11:14:08.608004 3659 log.go:181] (0x26278f0) (0x247c310) Stream added, broadcasting: 5\nI1005 11:14:08.609577 3659 log.go:181] (0x26278f0) Reply frame received for 5\nI1005 11:14:08.656401 3659 log.go:181] (0x26278f0) Data frame received for 5\nI1005 11:14:08.656617 3659 log.go:181] (0x26278f0) Data frame received for 3\nI1005 11:14:08.657149 3659 log.go:181] (0x247c310) (5) Data frame handling\nI1005 11:14:08.657441 3659 log.go:181] (0x2daa070) (3) Data frame handling\nI1005 11:14:08.658236 3659 log.go:181] (0x26278f0) Data frame received for 1\nI1005 11:14:08.658335 3659 log.go:181] (0x26279d0) (1) Data frame handling\nI1005 11:14:08.658566 3659 log.go:181] (0x26279d0) (1) Data frame sent\nI1005 11:14:08.658635 3659 log.go:181] (0x247c310) (5) Data frame sent\nI1005 11:14:08.658746 3659 log.go:181] (0x26278f0) Data frame received for 5\nI1005 11:14:08.658796 3659 log.go:181] (0x247c310) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.12 32591\nConnection to 172.18.0.12 32591 port [tcp/32591] succeeded!\nI1005 11:14:08.659743 3659 log.go:181] (0x247c310) (5) Data frame sent\nI1005 11:14:08.659838 3659 log.go:181] (0x26278f0) Data frame received for 5\nI1005 11:14:08.659908 3659 log.go:181] (0x247c310) (5) Data frame handling\nI1005 11:14:08.660421 3659 log.go:181] (0x26278f0) (0x26279d0) Stream removed, broadcasting: 1\nI1005 11:14:08.662154 3659 log.go:181] (0x26278f0) Go away received\nI1005 11:14:08.674626 3659 log.go:181] (0x26278f0) (0x26279d0) Stream removed, broadcasting: 1\nI1005 11:14:08.674897 3659 log.go:181] (0x26278f0) (0x2daa070) Stream removed, broadcasting: 3\nI1005 
11:14:08.675010 3659 log.go:181] (0x26278f0) (0x247c310) Stream removed, broadcasting: 5\n" Oct 5 11:14:08.681: INFO: stdout: "" Oct 5 11:14:08.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-9170 execpod8v4sb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32591' Oct 5 11:14:10.130: INFO: stderr: "I1005 11:14:10.039093 3679 log.go:181] (0x2f2c000) (0x2f2c070) Create stream\nI1005 11:14:10.041831 3679 log.go:181] (0x2f2c000) (0x2f2c070) Stream added, broadcasting: 1\nI1005 11:14:10.049253 3679 log.go:181] (0x2f2c000) Reply frame received for 1\nI1005 11:14:10.049691 3679 log.go:181] (0x2f2c000) (0x30a8070) Create stream\nI1005 11:14:10.049750 3679 log.go:181] (0x2f2c000) (0x30a8070) Stream added, broadcasting: 3\nI1005 11:14:10.051636 3679 log.go:181] (0x2f2c000) Reply frame received for 3\nI1005 11:14:10.052197 3679 log.go:181] (0x2f2c000) (0x30a82a0) Create stream\nI1005 11:14:10.052327 3679 log.go:181] (0x2f2c000) (0x30a82a0) Stream added, broadcasting: 5\nI1005 11:14:10.054426 3679 log.go:181] (0x2f2c000) Reply frame received for 5\nI1005 11:14:10.114043 3679 log.go:181] (0x2f2c000) Data frame received for 5\nI1005 11:14:10.114305 3679 log.go:181] (0x2f2c000) Data frame received for 3\nI1005 11:14:10.114508 3679 log.go:181] (0x30a8070) (3) Data frame handling\nI1005 11:14:10.114743 3679 log.go:181] (0x2f2c000) Data frame received for 1\nI1005 11:14:10.114806 3679 log.go:181] (0x2f2c070) (1) Data frame handling\nI1005 11:14:10.114930 3679 log.go:181] (0x30a82a0) (5) Data frame handling\nI1005 11:14:10.116301 3679 log.go:181] (0x30a82a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.13 32591\nConnection to 172.18.0.13 32591 port [tcp/32591] succeeded!\nI1005 11:14:10.117082 3679 log.go:181] (0x2f2c070) (1) Data frame sent\nI1005 11:14:10.117381 3679 log.go:181] (0x2f2c000) Data frame received for 5\nI1005 11:14:10.117509 3679 log.go:181] (0x30a82a0) (5) Data frame 
handling\nI1005 11:14:10.118392 3679 log.go:181] (0x2f2c000) (0x2f2c070) Stream removed, broadcasting: 1\nI1005 11:14:10.120280 3679 log.go:181] (0x2f2c000) Go away received\nI1005 11:14:10.122145 3679 log.go:181] (0x2f2c000) (0x2f2c070) Stream removed, broadcasting: 1\nI1005 11:14:10.122372 3679 log.go:181] (0x2f2c000) (0x30a8070) Stream removed, broadcasting: 3\nI1005 11:14:10.122576 3679 log.go:181] (0x2f2c000) (0x30a82a0) Stream removed, broadcasting: 5\n" Oct 5 11:14:10.131: INFO: stdout: "" [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:10.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9170" for this suite. [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:20.302 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":303,"completed":231,"skipped":3853,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:10.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 11:14:10.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c" in namespace "projected-7263" to be "Succeeded or Failed" Oct 5 11:14:10.267: INFO: Pod "downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c": Phase="Pending", Reason="", readiness=false. Elapsed: 32.636608ms Oct 5 11:14:12.276: INFO: Pod "downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041181138s Oct 5 11:14:14.284: INFO: Pod "downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.049732654s STEP: Saw pod success Oct 5 11:14:14.285: INFO: Pod "downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c" satisfied condition "Succeeded or Failed" Oct 5 11:14:14.290: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c container client-container: STEP: delete the pod Oct 5 11:14:14.338: INFO: Waiting for pod downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c to disappear Oct 5 11:14:14.351: INFO: Pod downwardapi-volume-4d39cc59-aa8e-4cb3-ad3c-1f3a794b457c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:14.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7263" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":232,"skipped":3854,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:14.368: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 11:14:14.466: INFO: Waiting up to 5m0s for pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2" in namespace "downward-api-1805" to be "Succeeded or Failed" Oct 5 11:14:14.494: INFO: Pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2": Phase="Pending", Reason="", readiness=false. Elapsed: 28.290089ms Oct 5 11:14:16.650: INFO: Pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183699229s Oct 5 11:14:18.655: INFO: Pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.189459331s Oct 5 11:14:20.664: INFO: Pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.197618743s STEP: Saw pod success Oct 5 11:14:20.664: INFO: Pod "downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2" satisfied condition "Succeeded or Failed" Oct 5 11:14:20.669: INFO: Trying to get logs from node kali-worker2 pod downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2 container dapi-container: STEP: delete the pod Oct 5 11:14:20.740: INFO: Waiting for pod downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2 to disappear Oct 5 11:14:20.747: INFO: Pod downward-api-1eb57db0-b6d7-42a0-8c6a-b5c0059affb2 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:20.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1805" for this suite. 
• [SLOW TEST:6.398 seconds] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":303,"completed":233,"skipped":3872,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:20.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir 0777 on tmpfs Oct 5 11:14:20.837: INFO: Waiting up to 5m0s for pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636" in namespace "emptydir-1925" to be "Succeeded or Failed" Oct 5 11:14:20.864: INFO: Pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.398683ms Oct 5 11:14:22.871: INFO: Pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033120243s Oct 5 11:14:24.877: INFO: Pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636": Phase="Running", Reason="", readiness=true. Elapsed: 4.039443779s Oct 5 11:14:26.884: INFO: Pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046764688s STEP: Saw pod success Oct 5 11:14:26.885: INFO: Pod "pod-7a91473f-cbc3-4fef-ae52-2751a3da8636" satisfied condition "Succeeded or Failed" Oct 5 11:14:26.890: INFO: Trying to get logs from node kali-worker2 pod pod-7a91473f-cbc3-4fef-ae52-2751a3da8636 container test-container: STEP: delete the pod Oct 5 11:14:26.950: INFO: Waiting for pod pod-7a91473f-cbc3-4fef-ae52-2751a3da8636 to disappear Oct 5 11:14:26.963: INFO: Pod pod-7a91473f-cbc3-4fef-ae52-2751a3da8636 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:26.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1925" for this suite. 
• [SLOW TEST:6.210 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":234,"skipped":3875,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:26.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 11:14:27.089: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b" in namespace "downward-api-1436" to be "Succeeded or Failed" Oct 5 11:14:27.111: INFO: Pod "downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.757276ms Oct 5 11:14:29.122: INFO: Pod "downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033318658s Oct 5 11:14:31.130: INFO: Pod "downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041034084s STEP: Saw pod success Oct 5 11:14:31.130: INFO: Pod "downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b" satisfied condition "Succeeded or Failed" Oct 5 11:14:31.136: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b container client-container: STEP: delete the pod Oct 5 11:14:31.483: INFO: Waiting for pod downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b to disappear Oct 5 11:14:31.495: INFO: Pod downwardapi-volume-9518bb14-bb00-4408-91bb-e0bf99bdbc2b no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:31.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1436" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":303,"completed":235,"skipped":3879,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:31.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:31.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2465" for this suite. STEP: Destroying namespace "nspatchtest-6b09854a-a0fb-46ab-8333-1e24c3a1d5c4-9513" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":303,"completed":236,"skipped":3900,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:31.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-projected-krn5 STEP: Creating a pod to test atomic-volume-subpath Oct 5 11:14:31.857: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-krn5" in namespace "subpath-7763" to be "Succeeded or Failed" Oct 5 11:14:31.919: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Pending", Reason="", readiness=false. Elapsed: 61.765443ms Oct 5 11:14:33.925: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067837703s Oct 5 11:14:35.931: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.073652721s Oct 5 11:14:37.939: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 6.081316704s Oct 5 11:14:39.946: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 8.088967675s Oct 5 11:14:41.953: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 10.095511658s Oct 5 11:14:44.021: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 12.163790073s Oct 5 11:14:46.029: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 14.171891253s Oct 5 11:14:48.037: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 16.179816952s Oct 5 11:14:50.045: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 18.187842531s Oct 5 11:14:52.052: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 20.194779655s Oct 5 11:14:54.061: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Running", Reason="", readiness=true. Elapsed: 22.20321767s Oct 5 11:14:56.067: INFO: Pod "pod-subpath-test-projected-krn5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.209201902s STEP: Saw pod success Oct 5 11:14:56.067: INFO: Pod "pod-subpath-test-projected-krn5" satisfied condition "Succeeded or Failed" Oct 5 11:14:56.071: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-krn5 container test-container-subpath-projected-krn5: STEP: delete the pod Oct 5 11:14:56.405: INFO: Waiting for pod pod-subpath-test-projected-krn5 to disappear Oct 5 11:14:56.436: INFO: Pod pod-subpath-test-projected-krn5 no longer exists STEP: Deleting pod pod-subpath-test-projected-krn5 Oct 5 11:14:56.436: INFO: Deleting pod "pod-subpath-test-projected-krn5" in namespace "subpath-7763" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:56.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7763" for this suite. • [SLOW TEST:24.698 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":303,"completed":237,"skipped":3911,"failed":0} [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:56.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating projection with secret that has name secret-emptykey-test-4e70c453-673a-4c8f-b06c-d9ba4d41ea7b [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:14:56.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9284" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":303,"completed":238,"skipped":3911,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:14:56.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 11:15:17.178: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 11:15:19.198: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493317, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493317, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493317, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493317, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 11:15:22.247: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:15:22.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4350-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:15:23.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5562" for this suite. STEP: Destroying namespace "webhook-5562-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:26.927 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":303,"completed":239,"skipped":3961,"failed":0} [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:15:23.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [BeforeEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333 STEP: creating the pod Oct 5 11:15:23.556: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9352' Oct 5 11:15:25.903: INFO: stderr: "" Oct 5 11:15:25.903: INFO: stdout: "pod/pause created\n" Oct 5 11:15:25.903: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Oct 5 11:15:25.903: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9352" to be "running and ready" Oct 5 11:15:25.911: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 7.565176ms Oct 5 11:15:27.919: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015908242s Oct 5 11:15:29.927: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.023686957s Oct 5 11:15:29.927: INFO: Pod "pause" satisfied condition "running and ready" Oct 5 11:15:29.927: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: adding the label testing-label with value testing-label-value to a pod Oct 5 11:15:29.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9352' Oct 5 11:15:31.229: INFO: stderr: "" Oct 5 11:15:31.229: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Oct 5 11:15:31.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9352' Oct 5 11:15:32.425: INFO: stderr: "" Oct 5 11:15:32.426: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod Oct 5 11:15:32.426: INFO: Running 
'/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9352' Oct 5 11:15:33.709: INFO: stderr: "" Oct 5 11:15:33.709: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Oct 5 11:15:33.710: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9352' Oct 5 11:15:34.969: INFO: stderr: "" Oct 5 11:15:34.969: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n" [AfterEach] Kubectl label /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1340 STEP: using delete to clean up resources Oct 5 11:15:34.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9352' Oct 5 11:15:36.173: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Oct 5 11:15:36.174: INFO: stdout: "pod \"pause\" force deleted\n" Oct 5 11:15:36.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9352' Oct 5 11:15:37.408: INFO: stderr: "No resources found in kubectl-9352 namespace.\n" Oct 5 11:15:37.408: INFO: stdout: "" Oct 5 11:15:37.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9352 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Oct 5 11:15:38.646: INFO: stderr: "" Oct 5 11:15:38.646: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:15:38.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9352" for this suite. 
• [SLOW TEST:15.174 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1330
    should update the label on a resource [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":303,"completed":240,"skipped":3961,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:15:38.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Oct 5 11:15:50.092: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Oct 5 11:15:52.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493350, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493350, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493350, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493350, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 11:15:55.150: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 11:15:55.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3084-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:15:56.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8171" for this suite.
STEP: Destroying namespace "webhook-8171-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:17.855 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":303,"completed":241,"skipped":3979,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:15:56.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 11:15:56.620: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-e3bd30fa-8a5b-4e63-9f69-913fc4bafecb" in namespace "security-context-test-5247" to be "Succeeded or Failed"
Oct 5 11:15:56.642: INFO: Pod "busybox-readonly-false-e3bd30fa-8a5b-4e63-9f69-913fc4bafecb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.462963ms
Oct 5 11:15:58.650: INFO: Pod "busybox-readonly-false-e3bd30fa-8a5b-4e63-9f69-913fc4bafecb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02972983s
Oct 5 11:16:00.657: INFO: Pod "busybox-readonly-false-e3bd30fa-8a5b-4e63-9f69-913fc4bafecb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036253771s
Oct 5 11:16:00.657: INFO: Pod "busybox-readonly-false-e3bd30fa-8a5b-4e63-9f69-913fc4bafecb" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:00.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5247" for this suite.
•
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":303,"completed":242,"skipped":3981,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:00.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating configMap with name configmap-test-volume-67c8b0fa-3b30-4554-a2d2-563f2c058c5d
STEP: Creating a pod to test consume configMaps
Oct 5 11:16:00.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976" in namespace "configmap-6568" to be "Succeeded or Failed"
Oct 5 11:16:00.816: INFO: Pod "pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976": Phase="Pending", Reason="", readiness=false. Elapsed: 13.152022ms
Oct 5 11:16:02.824: INFO: Pod "pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020762657s
Oct 5 11:16:04.829: INFO: Pod "pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026539451s
STEP: Saw pod success
Oct 5 11:16:04.830: INFO: Pod "pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976" satisfied condition "Succeeded or Failed"
Oct 5 11:16:04.834: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976 container configmap-volume-test: 
STEP: delete the pod
Oct 5 11:16:04.881: INFO: Waiting for pod pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976 to disappear
Oct 5 11:16:04.893: INFO: Pod pod-configmaps-dac61a0e-3e94-4b28-8e07-c5dfc5244976 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:04.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6568" for this suite.
•
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":243,"skipped":3997,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:04.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Oct 5 11:16:15.739: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Oct 5 11:16:17.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Oct 5 11:16:19.769: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493375, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-85d57b96d6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Oct 5 11:16:22.816: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
Oct 5 11:16:22.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:24.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-7606" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
• [SLOW TEST:19.274 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":303,"completed":244,"skipped":4004,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:24.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test emptydir 0666 on node default medium
Oct 5 11:16:24.291: INFO: Waiting up to 5m0s for pod "pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2" in namespace "emptydir-5089" to be "Succeeded or Failed"
Oct 5 11:16:24.326: INFO: Pod "pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2": Phase="Pending", Reason="", readiness=false. Elapsed: 35.187535ms
Oct 5 11:16:26.363: INFO: Pod "pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071985501s
Oct 5 11:16:28.370: INFO: Pod "pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079131216s
STEP: Saw pod success
Oct 5 11:16:28.371: INFO: Pod "pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2" satisfied condition "Succeeded or Failed"
Oct 5 11:16:28.376: INFO: Trying to get logs from node kali-worker2 pod pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2 container test-container: 
STEP: delete the pod
Oct 5 11:16:28.552: INFO: Waiting for pod pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2 to disappear
Oct 5 11:16:28.565: INFO: Pod pod-4bed14fb-12c9-4e99-9ce4-16bd0e2a32e2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:28.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5089" for this suite.
•
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":245,"skipped":4015,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:28.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Performing setup for networking test in namespace pod-network-test-7014
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Oct 5 11:16:28.711: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Oct 5 11:16:28.785: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 11:16:30.974: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 11:16:32.793: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Oct 5 11:16:34.795: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:36.794: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:38.794: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:40.795: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:42.793: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:44.793: INFO: The status of Pod netserver-0 is Running (Ready = false)
Oct 5 11:16:46.794: INFO: The status of Pod netserver-0 is Running (Ready = true)
Oct 5 11:16:46.804: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Oct 5 11:16:50.842: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.2.142&port=8081&tries=1'] Namespace:pod-network-test-7014 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 11:16:50.842: INFO: >>> kubeConfig: /root/.kube/config
I1005 11:16:50.952437 10 log.go:181] (0xa54fb90) (0xa54fc70) Create stream
I1005 11:16:50.952584 10 log.go:181] (0xa54fb90) (0xa54fc70) Stream added, broadcasting: 1
I1005 11:16:50.956824 10 log.go:181] (0xa54fb90) Reply frame received for 1
I1005 11:16:50.957075 10 log.go:181] (0xa54fb90) (0x8167c00) Create stream
I1005 11:16:50.957158 10 log.go:181] (0xa54fb90) (0x8167c00) Stream added, broadcasting: 3
I1005 11:16:50.958650 10 log.go:181] (0xa54fb90) Reply frame received for 3
I1005 11:16:50.958779 10 log.go:181] (0xa54fb90) (0x8a840e0) Create stream
I1005 11:16:50.958839 10 log.go:181] (0xa54fb90) (0x8a840e0) Stream added, broadcasting: 5
I1005 11:16:50.960159 10 log.go:181] (0xa54fb90) Reply frame received for 5
I1005 11:16:51.045583 10 log.go:181] (0xa54fb90) Data frame received for 3
I1005 11:16:51.045812 10 log.go:181] (0x8167c00) (3) Data frame handling
I1005 11:16:51.046004 10 log.go:181] (0x8167c00) (3) Data frame sent
I1005 11:16:51.046159 10 log.go:181] (0xa54fb90) Data frame received for 5
I1005 11:16:51.046341 10 log.go:181] (0x8a840e0) (5) Data frame handling
I1005 11:16:51.046488 10 log.go:181] (0xa54fb90) Data frame received for 3
I1005 11:16:51.046685 10 log.go:181] (0x8167c00) (3) Data frame handling
I1005 11:16:51.049470 10 log.go:181] (0xa54fb90) Data frame received for 1
I1005 11:16:51.049581 10 log.go:181] (0xa54fc70) (1) Data frame handling
I1005 11:16:51.049702 10 log.go:181] (0xa54fc70) (1) Data frame sent
I1005 11:16:51.049841 10 log.go:181] (0xa54fb90) (0xa54fc70) Stream removed, broadcasting: 1
I1005 11:16:51.049982 10 log.go:181] (0xa54fb90) Go away received
I1005 11:16:51.050411 10 log.go:181] (0xa54fb90) (0xa54fc70) Stream removed, broadcasting: 1
I1005 11:16:51.050590 10 log.go:181] (0xa54fb90) (0x8167c00) Stream removed, broadcasting: 3
I1005 11:16:51.050718 10 log.go:181] (0xa54fb90) (0x8a840e0) Stream removed, broadcasting: 5
Oct 5 11:16:51.050: INFO: Waiting for responses: map[]
Oct 5 11:16:51.056: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.158:8080/dial?request=hostname&protocol=udp&host=10.244.1.157&port=8081&tries=1'] Namespace:pod-network-test-7014 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Oct 5 11:16:51.057: INFO: >>> kubeConfig: /root/.kube/config
I1005 11:16:51.158778 10 log.go:181] (0x8758700) (0x87589a0) Create stream
I1005 11:16:51.158915 10 log.go:181] (0x8758700) (0x87589a0) Stream added, broadcasting: 1
I1005 11:16:51.163540 10 log.go:181] (0x8758700) Reply frame received for 1
I1005 11:16:51.163786 10 log.go:181] (0x8758700) (0xa638d90) Create stream
I1005 11:16:51.163907 10 log.go:181] (0x8758700) (0xa638d90) Stream added, broadcasting: 3
I1005 11:16:51.166935 10 log.go:181] (0x8758700) Reply frame received for 3
I1005 11:16:51.167066 10 log.go:181] (0x8758700) (0x8759570) Create stream
I1005 11:16:51.167124 10 log.go:181] (0x8758700) (0x8759570) Stream added, broadcasting: 5
I1005 11:16:51.168698 10 log.go:181] (0x8758700) Reply frame received for 5
I1005 11:16:51.237679 10 log.go:181] (0x8758700) Data frame received for 3
I1005 11:16:51.237951 10 log.go:181] (0xa638d90) (3) Data frame handling
I1005 11:16:51.238117 10 log.go:181] (0x8758700) Data frame received for 5
I1005 11:16:51.238360 10 log.go:181] (0x8759570) (5) Data frame handling
I1005 11:16:51.238549 10 log.go:181] (0xa638d90) (3) Data frame sent
I1005 11:16:51.238721 10 log.go:181] (0x8758700) Data frame received for 3
I1005 11:16:51.238842 10 log.go:181] (0xa638d90) (3) Data frame handling
I1005 11:16:51.239565 10 log.go:181] (0x8758700) Data frame received for 1
I1005 11:16:51.239746 10 log.go:181] (0x87589a0) (1) Data frame handling
I1005 11:16:51.239890 10 log.go:181] (0x87589a0) (1) Data frame sent
I1005 11:16:51.240059 10 log.go:181] (0x8758700) (0x87589a0) Stream removed, broadcasting: 1
I1005 11:16:51.240285 10 log.go:181] (0x8758700) Go away received
I1005 11:16:51.240819 10 log.go:181] (0x8758700) (0x87589a0) Stream removed, broadcasting: 1
I1005 11:16:51.241130 10 log.go:181] (0x8758700) (0xa638d90) Stream removed, broadcasting: 3
I1005 11:16:51.241329 10 log.go:181] (0x8758700) (0x8759570) Stream removed, broadcasting: 5
Oct 5 11:16:51.241: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:51.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7014" for this suite.
• [SLOW TEST:22.675 seconds]
[sig-network] Networking
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":303,"completed":246,"skipped":4024,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:51.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782
[It] should find a service from listing all namespaces [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:51.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5200" for this suite.
[AfterEach] [sig-network] Services
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786
•
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":303,"completed":247,"skipped":4034,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:16:51.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Oct 5 11:16:51.424: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Oct 5 11:16:51.455: INFO: Waiting for terminating namespaces to be deleted...
Oct 5 11:16:51.461: INFO: Logging pods the apiserver thinks is on node kali-worker before test
Oct 5 11:16:51.469: INFO: kindnet-pdv4j from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.469: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 11:16:51.469: INFO: kube-proxy-qsqz8 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.469: INFO: Container kube-proxy ready: true, restart count 0
Oct 5 11:16:51.470: INFO: netserver-0 from pod-network-test-7014 started at 2020-10-05 11:16:28 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.470: INFO: Container webserver ready: true, restart count 0
Oct 5 11:16:51.470: INFO: Logging pods the apiserver thinks is on node kali-worker2 before test
Oct 5 11:16:51.495: INFO: kindnet-pgjc7 from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.495: INFO: Container kindnet-cni ready: true, restart count 0
Oct 5 11:16:51.495: INFO: kube-proxy-qhsmg from kube-system started at 2020-09-23 08:29:08 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.495: INFO: Container kube-proxy ready: true, restart count 0
Oct 5 11:16:51.495: INFO: netserver-1 from pod-network-test-7014 started at 2020-10-05 11:16:28 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.495: INFO: Container webserver ready: true, restart count 0
Oct 5 11:16:51.495: INFO: test-container-pod from pod-network-test-7014 started at 2020-10-05 11:16:46 +0000 UTC (1 container statuses recorded)
Oct 5 11:16:51.495: INFO: Container webserver ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Oct 5 11:16:51.620: INFO: Pod kindnet-pdv4j requesting resource cpu=100m on Node kali-worker
Oct 5 11:16:51.621: INFO: Pod kindnet-pgjc7 requesting resource cpu=100m on Node kali-worker2
Oct 5 11:16:51.621: INFO: Pod kube-proxy-qhsmg requesting resource cpu=0m on Node kali-worker2
Oct 5 11:16:51.621: INFO: Pod kube-proxy-qsqz8 requesting resource cpu=0m on Node kali-worker
Oct 5 11:16:51.621: INFO: Pod netserver-0 requesting resource cpu=0m on Node kali-worker
Oct 5 11:16:51.621: INFO: Pod netserver-1 requesting resource cpu=0m on Node kali-worker2
Oct 5 11:16:51.621: INFO: Pod test-container-pod requesting resource cpu=0m on Node kali-worker2
STEP: Starting Pods to consume most of the cluster CPU.
Oct 5 11:16:51.621: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
Oct 5 11:16:51.632: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091.163b147bdf0f4834], Reason = [Created], Message = [Created container filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96.163b147bfb49d8f9], Reason = [Created], Message = [Created container filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96.163b147b29668d1c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7995/filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96 to kali-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96.163b147bb59a1e7c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091.163b147b7889a5e8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96.163b147c09faf849], Reason = [Started], Message = [Started container filler-pod-7cd4d453-ac1f-42a1-b43f-0ec148cb7b96]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091.163b147b278f4e5b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7995/filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091 to kali-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091.163b147bf86cbc06], Reason = [Started], Message = [Started container filler-pod-0f4792b0-8903-4c7d-98bc-8ed80e14f091]
STEP: Considering event: Type = [Warning], Name = [additional-pod.163b147ca2b8414f], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: Type = [Warning], Name = [additional-pod.163b147ca5a1416c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:16:59.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7995" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 • [SLOW TEST:7.825 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":303,"completed":248,"skipped":4039,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:16:59.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-map-164244f0-112b-4c7e-b5bc-be3eb8e51405 STEP: Creating a pod to test 
consume configMaps Oct 5 11:16:59.341: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a" in namespace "projected-4694" to be "Succeeded or Failed" Oct 5 11:16:59.344: INFO: Pod "pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.397252ms Oct 5 11:17:01.396: INFO: Pod "pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055621069s Oct 5 11:17:03.404: INFO: Pod "pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063659869s STEP: Saw pod success Oct 5 11:17:03.405: INFO: Pod "pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a" satisfied condition "Succeeded or Failed" Oct 5 11:17:03.411: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:17:03.598: INFO: Waiting for pod pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a to disappear Oct 5 11:17:03.621: INFO: Pod pod-projected-configmaps-a9dccfdd-59d7-4695-a01f-6872163c2d5a no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:17:03.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4694" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":249,"skipped":4050,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:17:03.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 11:17:13.898: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 11:17:15.920: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493433, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493433, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493433, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493433, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 11:17:18.987: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:17:31.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3277" for this suite. STEP: Destroying namespace "webhook-3277-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:27.717 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":303,"completed":250,"skipped":4051,"failed":0} SSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:17:31.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should 
have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:17:35.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8853" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":303,"completed":251,"skipped":4054,"failed":0} ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:17:35.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W1005 11:17:36.688282 10 
metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Oct 5 11:18:39.007: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:18:39.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8590" for this suite. • [SLOW TEST:63.295 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":303,"completed":252,"skipped":4054,"failed":0} SS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:18:39.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] 
[sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating replication controller my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8 Oct 5 11:18:39.154: INFO: Pod name my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8: Found 0 pods out of 1 Oct 5 11:18:44.161: INFO: Pod name my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8: Found 1 pods out of 1 Oct 5 11:18:44.161: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8" are running Oct 5 11:18:44.166: INFO: Pod "my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8-ld9qn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 11:18:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 11:18:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 11:18:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-10-05 11:18:39 +0000 UTC Reason: Message:}]) Oct 5 11:18:44.168: INFO: Trying to dial the pod Oct 5 11:18:49.187: INFO: Controller my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8: Got expected result from replica 1 [my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8-ld9qn]: "my-hostname-basic-d8c2970b-8363-47d7-8c58-6d6df32491a8-ld9qn", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:18:49.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-633" for this suite. • [SLOW TEST:10.179 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":303,"completed":253,"skipped":4056,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:18:49.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
configmap-test-volume-map-b1089e0d-2f76-4084-9d9e-ef684518e11e STEP: Creating a pod to test consume configMaps Oct 5 11:18:49.321: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292" in namespace "configmap-4581" to be "Succeeded or Failed" Oct 5 11:18:49.348: INFO: Pod "pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292": Phase="Pending", Reason="", readiness=false. Elapsed: 26.659462ms Oct 5 11:18:51.355: INFO: Pod "pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033809613s Oct 5 11:18:53.362: INFO: Pod "pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040956974s STEP: Saw pod success Oct 5 11:18:53.363: INFO: Pod "pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292" satisfied condition "Succeeded or Failed" Oct 5 11:18:53.368: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292 container configmap-volume-test: STEP: delete the pod Oct 5 11:18:53.427: INFO: Waiting for pod pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292 to disappear Oct 5 11:18:53.431: INFO: Pod pod-configmaps-b7533c3b-8af4-4356-9190-9089da48a292 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:18:53.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4581" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":303,"completed":254,"skipped":4069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:18:53.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 11:19:04.590: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 11:19:06.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493544, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493544, loc:(*time.Location)(0x5d1d160)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493544, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493544, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 11:19:09.804: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:19:09.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-898-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:19:10.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5838" for this suite. STEP: Destroying namespace "webhook-5838-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.577 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":303,"completed":255,"skipped":4107,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:19:11.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod test-webserver-bbea0cdf-a514-4c47-b6f8-6e23f7206ea4 in namespace container-probe-5683 Oct 5 11:19:15.187: INFO: Started pod test-webserver-bbea0cdf-a514-4c47-b6f8-6e23f7206ea4 in namespace container-probe-5683 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 11:19:15.192: INFO: Initial restart count of pod test-webserver-bbea0cdf-a514-4c47-b6f8-6e23f7206ea4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:23:16.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5683" for this suite. • [SLOW TEST:245.175 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":303,"completed":256,"skipped":4115,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] EmptyDir volumes 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:23:16.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test emptydir volume type on tmpfs Oct 5 11:23:16.484: INFO: Waiting up to 5m0s for pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa" in namespace "emptydir-4527" to be "Succeeded or Failed" Oct 5 11:23:16.691: INFO: Pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa": Phase="Pending", Reason="", readiness=false. Elapsed: 206.587378ms Oct 5 11:23:18.698: INFO: Pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213983104s Oct 5 11:23:20.708: INFO: Pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa": Phase="Running", Reason="", readiness=true. Elapsed: 4.223352802s Oct 5 11:23:22.716: INFO: Pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.231313029s STEP: Saw pod success Oct 5 11:23:22.716: INFO: Pod "pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa" satisfied condition "Succeeded or Failed" Oct 5 11:23:22.722: INFO: Trying to get logs from node kali-worker2 pod pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa container test-container: STEP: delete the pod Oct 5 11:23:22.770: INFO: Waiting for pod pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa to disappear Oct 5 11:23:22.784: INFO: Pod pod-3c08a5eb-793d-4f30-9544-5ef4bda578aa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:23:22.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4527" for this suite. • [SLOW TEST:6.579 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":257,"skipped":4128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:23:22.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating service in namespace services-7424 STEP: creating service affinity-nodeport-transition in namespace services-7424 STEP: creating replication controller affinity-nodeport-transition in namespace services-7424 I1005 11:23:23.018531 10 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-7424, replica count: 3 I1005 11:23:26.069973 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:23:29.070978 10 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 11:23:29.091: INFO: Creating new exec pod Oct 5 11:23:34.127: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Oct 5 11:23:35.614: INFO: stderr: "I1005 11:23:35.484354 3860 log.go:181] (0x2a9e2a0) (0x2a9e310) Create stream\nI1005 11:23:35.489132 3860 
log.go:181] (0x2a9e2a0) (0x2a9e310) Stream added, broadcasting: 1\nI1005 11:23:35.499939 3860 log.go:181] (0x2a9e2a0) Reply frame received for 1\nI1005 11:23:35.500473 3860 log.go:181] (0x2a9e2a0) (0x2f28070) Create stream\nI1005 11:23:35.500546 3860 log.go:181] (0x2a9e2a0) (0x2f28070) Stream added, broadcasting: 3\nI1005 11:23:35.502132 3860 log.go:181] (0x2a9e2a0) Reply frame received for 3\nI1005 11:23:35.502452 3860 log.go:181] (0x2a9e2a0) (0x2512e00) Create stream\nI1005 11:23:35.502560 3860 log.go:181] (0x2a9e2a0) (0x2512e00) Stream added, broadcasting: 5\nI1005 11:23:35.503995 3860 log.go:181] (0x2a9e2a0) Reply frame received for 5\nI1005 11:23:35.594955 3860 log.go:181] (0x2a9e2a0) Data frame received for 5\nI1005 11:23:35.595223 3860 log.go:181] (0x2512e00) (5) Data frame handling\nI1005 11:23:35.595546 3860 log.go:181] (0x2a9e2a0) Data frame received for 3\n+ nc -zv -t -w 2 affinity-nodeport-transition 80\nI1005 11:23:35.595946 3860 log.go:181] (0x2f28070) (3) Data frame handling\nI1005 11:23:35.596203 3860 log.go:181] (0x2512e00) (5) Data frame sent\nI1005 11:23:35.596536 3860 log.go:181] (0x2a9e2a0) Data frame received for 1\nI1005 11:23:35.596683 3860 log.go:181] (0x2a9e310) (1) Data frame handling\nI1005 11:23:35.596958 3860 log.go:181] (0x2a9e310) (1) Data frame sent\nI1005 11:23:35.597109 3860 log.go:181] (0x2a9e2a0) Data frame received for 5\nI1005 11:23:35.597225 3860 log.go:181] (0x2512e00) (5) Data frame handling\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\nI1005 11:23:35.597402 3860 log.go:181] (0x2512e00) (5) Data frame sent\nI1005 11:23:35.597791 3860 log.go:181] (0x2a9e2a0) Data frame received for 5\nI1005 11:23:35.597888 3860 log.go:181] (0x2512e00) (5) Data frame handling\nI1005 11:23:35.598738 3860 log.go:181] (0x2a9e2a0) (0x2a9e310) Stream removed, broadcasting: 1\nI1005 11:23:35.601592 3860 log.go:181] (0x2a9e2a0) Go away received\nI1005 11:23:35.605204 3860 log.go:181] (0x2a9e2a0) (0x2a9e310) Stream 
removed, broadcasting: 1\nI1005 11:23:35.605361 3860 log.go:181] (0x2a9e2a0) (0x2f28070) Stream removed, broadcasting: 3\nI1005 11:23:35.605485 3860 log.go:181] (0x2a9e2a0) (0x2512e00) Stream removed, broadcasting: 5\n" Oct 5 11:23:35.615: INFO: stdout: "" Oct 5 11:23:35.621: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c nc -zv -t -w 2 10.108.115.83 80' Oct 5 11:23:37.102: INFO: stderr: "I1005 11:23:36.957059 3880 log.go:181] (0x27b7c00) (0x27b7e30) Create stream\nI1005 11:23:36.960007 3880 log.go:181] (0x27b7c00) (0x27b7e30) Stream added, broadcasting: 1\nI1005 11:23:37.000336 3880 log.go:181] (0x27b7c00) Reply frame received for 1\nI1005 11:23:37.000766 3880 log.go:181] (0x27b7c00) (0x251c380) Create stream\nI1005 11:23:37.000829 3880 log.go:181] (0x27b7c00) (0x251c380) Stream added, broadcasting: 3\nI1005 11:23:37.002089 3880 log.go:181] (0x27b7c00) Reply frame received for 3\nI1005 11:23:37.002321 3880 log.go:181] (0x27b7c00) (0x3028070) Create stream\nI1005 11:23:37.002391 3880 log.go:181] (0x27b7c00) (0x3028070) Stream added, broadcasting: 5\nI1005 11:23:37.003361 3880 log.go:181] (0x27b7c00) Reply frame received for 5\nI1005 11:23:37.083224 3880 log.go:181] (0x27b7c00) Data frame received for 3\nI1005 11:23:37.083984 3880 log.go:181] (0x27b7c00) Data frame received for 5\nI1005 11:23:37.084260 3880 log.go:181] (0x3028070) (5) Data frame handling\nI1005 11:23:37.084580 3880 log.go:181] (0x27b7c00) Data frame received for 1\nI1005 11:23:37.084787 3880 log.go:181] (0x27b7e30) (1) Data frame handling\nI1005 11:23:37.085028 3880 log.go:181] (0x251c380) (3) Data frame handling\nI1005 11:23:37.085826 3880 log.go:181] (0x27b7e30) (1) Data frame sent\nI1005 11:23:37.086458 3880 log.go:181] (0x3028070) (5) Data frame sent\nI1005 11:23:37.086580 3880 log.go:181] (0x27b7c00) Data frame received for 5\nI1005 11:23:37.086714 3880 log.go:181] 
(0x3028070) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.115.83 80\nConnection to 10.108.115.83 80 port [tcp/http] succeeded!\nI1005 11:23:37.087665 3880 log.go:181] (0x27b7c00) (0x27b7e30) Stream removed, broadcasting: 1\nI1005 11:23:37.090952 3880 log.go:181] (0x27b7c00) Go away received\nI1005 11:23:37.093532 3880 log.go:181] (0x27b7c00) (0x27b7e30) Stream removed, broadcasting: 1\nI1005 11:23:37.093706 3880 log.go:181] (0x27b7c00) (0x251c380) Stream removed, broadcasting: 3\nI1005 11:23:37.093872 3880 log.go:181] (0x27b7c00) (0x3028070) Stream removed, broadcasting: 5\n" Oct 5 11:23:37.103: INFO: stdout: "" Oct 5 11:23:37.104: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.12 32447' Oct 5 11:23:38.696: INFO: stderr: "I1005 11:23:38.549436 3900 log.go:181] (0x2e1d5e0) (0x2e1d650) Create stream\nI1005 11:23:38.553335 3900 log.go:181] (0x2e1d5e0) (0x2e1d650) Stream added, broadcasting: 1\nI1005 11:23:38.563228 3900 log.go:181] (0x2e1d5e0) Reply frame received for 1\nI1005 11:23:38.563943 3900 log.go:181] (0x2e1d5e0) (0x29c0150) Create stream\nI1005 11:23:38.564045 3900 log.go:181] (0x2e1d5e0) (0x29c0150) Stream added, broadcasting: 3\nI1005 11:23:38.565862 3900 log.go:181] (0x2e1d5e0) Reply frame received for 3\nI1005 11:23:38.566138 3900 log.go:181] (0x2e1d5e0) (0x2733880) Create stream\nI1005 11:23:38.566212 3900 log.go:181] (0x2e1d5e0) (0x2733880) Stream added, broadcasting: 5\nI1005 11:23:38.567656 3900 log.go:181] (0x2e1d5e0) Reply frame received for 5\nI1005 11:23:38.676711 3900 log.go:181] (0x2e1d5e0) Data frame received for 3\nI1005 11:23:38.678787 3900 log.go:181] (0x2e1d5e0) Data frame received for 5\nI1005 11:23:38.678950 3900 log.go:181] (0x2733880) (5) Data frame handling\nI1005 11:23:38.679303 3900 log.go:181] (0x2e1d5e0) Data frame received for 1\nI1005 11:23:38.679466 3900 log.go:181] 
(0x2e1d650) (1) Data frame handling\nI1005 11:23:38.679699 3900 log.go:181] (0x29c0150) (3) Data frame handling\nI1005 11:23:38.680458 3900 log.go:181] (0x2733880) (5) Data frame sent\nI1005 11:23:38.681407 3900 log.go:181] (0x2e1d650) (1) Data frame sent\nI1005 11:23:38.681883 3900 log.go:181] (0x2e1d5e0) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.12 32447\nConnection to 172.18.0.12 32447 port [tcp/32447] succeeded!\nI1005 11:23:38.681987 3900 log.go:181] (0x2733880) (5) Data frame handling\nI1005 11:23:38.682893 3900 log.go:181] (0x2e1d5e0) (0x2e1d650) Stream removed, broadcasting: 1\nI1005 11:23:38.683638 3900 log.go:181] (0x2e1d5e0) Go away received\nI1005 11:23:38.686360 3900 log.go:181] (0x2e1d5e0) (0x2e1d650) Stream removed, broadcasting: 1\nI1005 11:23:38.686572 3900 log.go:181] (0x2e1d5e0) (0x29c0150) Stream removed, broadcasting: 3\nI1005 11:23:38.686753 3900 log.go:181] (0x2e1d5e0) (0x2733880) Stream removed, broadcasting: 5\n" Oct 5 11:23:38.697: INFO: stdout: "" Oct 5 11:23:38.697: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32447' Oct 5 11:23:40.150: INFO: stderr: "I1005 11:23:40.025772 3920 log.go:181] (0x2fb2150) (0x2fb21c0) Create stream\nI1005 11:23:40.028756 3920 log.go:181] (0x2fb2150) (0x2fb21c0) Stream added, broadcasting: 1\nI1005 11:23:40.039045 3920 log.go:181] (0x2fb2150) Reply frame received for 1\nI1005 11:23:40.039863 3920 log.go:181] (0x2fb2150) (0x3128070) Create stream\nI1005 11:23:40.039987 3920 log.go:181] (0x2fb2150) (0x3128070) Stream added, broadcasting: 3\nI1005 11:23:40.041977 3920 log.go:181] (0x2fb2150) Reply frame received for 3\nI1005 11:23:40.042442 3920 log.go:181] (0x2fb2150) (0x2fb2380) Create stream\nI1005 11:23:40.042554 3920 log.go:181] (0x2fb2150) (0x2fb2380) Stream added, broadcasting: 5\nI1005 11:23:40.044053 3920 log.go:181] (0x2fb2150) Reply 
frame received for 5\nI1005 11:23:40.122144 3920 log.go:181] (0x2fb2150) Data frame received for 5\nI1005 11:23:40.122425 3920 log.go:181] (0x2fb2380) (5) Data frame handling\nI1005 11:23:40.122794 3920 log.go:181] (0x2fb2150) Data frame received for 3\nI1005 11:23:40.123034 3920 log.go:181] (0x3128070) (3) Data frame handling\nI1005 11:23:40.123209 3920 log.go:181] (0x2fb2380) (5) Data frame sent\nI1005 11:23:40.123424 3920 log.go:181] (0x2fb2150) Data frame received for 1\nI1005 11:23:40.123544 3920 log.go:181] (0x2fb21c0) (1) Data frame handling\nI1005 11:23:40.123667 3920 log.go:181] (0x2fb21c0) (1) Data frame sent\nI1005 11:23:40.123938 3920 log.go:181] (0x2fb2150) Data frame received for 5\nI1005 11:23:40.124003 3920 log.go:181] (0x2fb2380) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32447\nConnection to 172.18.0.13 32447 port [tcp/32447] succeeded!\nI1005 11:23:40.126236 3920 log.go:181] (0x2fb2150) (0x2fb21c0) Stream removed, broadcasting: 1\nI1005 11:23:40.127020 3920 log.go:181] (0x2fb2150) Go away received\nI1005 11:23:40.142669 3920 log.go:181] (0x2fb2150) (0x2fb21c0) Stream removed, broadcasting: 1\nI1005 11:23:40.142874 3920 log.go:181] (0x2fb2150) (0x3128070) Stream removed, broadcasting: 3\nI1005 11:23:40.143007 3920 log.go:181] (0x2fb2150) (0x2fb2380) Stream removed, broadcasting: 5\n" Oct 5 11:23:40.150: INFO: stdout: "" Oct 5 11:23:40.174: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32447/ ; done' Oct 5 11:23:41.823: INFO: stderr: "I1005 11:23:41.602113 3941 log.go:181] (0x2c6b500) (0x2c6b570) Create stream\nI1005 11:23:41.604477 3941 log.go:181] (0x2c6b500) (0x2c6b570) Stream added, broadcasting: 1\nI1005 11:23:41.615696 3941 log.go:181] (0x2c6b500) Reply frame received for 1\nI1005 11:23:41.616381 3941 log.go:181] (0x2c6b500) 
(0x2d20150) Create stream\nI1005 11:23:41.616472 3941 log.go:181] (0x2c6b500) (0x2d20150) Stream added, broadcasting: 3\nI1005 11:23:41.617911 3941 log.go:181] (0x2c6b500) Reply frame received for 3\nI1005 11:23:41.618119 3941 log.go:181] (0x2c6b500) (0x2d20380) Create stream\nI1005 11:23:41.618177 3941 log.go:181] (0x2c6b500) (0x2d20380) Stream added, broadcasting: 5\nI1005 11:23:41.619489 3941 log.go:181] (0x2c6b500) Reply frame received for 5\nI1005 11:23:41.706979 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.707258 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.707489 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.707618 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.707767 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.708117 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.713544 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.713635 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.713736 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.714297 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.714434 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.714571 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.714817 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.714938 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.715055 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.719597 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.719714 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.719802 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.720452 3941 log.go:181] 
(0x2c6b500) Data frame received for 5\nI1005 11:23:41.720559 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\nI1005 11:23:41.720715 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.720970 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.721047 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.721134 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.721213 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.721342 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.721479 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.726718 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.726884 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.727018 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.727319 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.727426 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.727575 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.727670 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -sI1005 11:23:41.727776 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.727897 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.727969 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.728025 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.728115 3941 log.go:181] (0x2d20380) (5) Data frame sent\n --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.730942 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.731099 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.731264 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.731525 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 
11:23:41.731715 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\nI1005 11:23:41.731899 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.732010 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.732113 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.732315 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.732480 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.732635 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.732790 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.738066 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.738155 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.738283 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.738563 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.738677 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.738770 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.738875 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.738953 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.739023 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.743760 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.743831 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.743913 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.744542 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.744665 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.744760 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.744961 3941 log.go:181] (0x2d20380) (5) Data frame 
sent\nI1005 11:23:41.745093 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.745225 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.748650 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.748754 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.748891 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.749395 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.749509 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.749693 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.749843 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.749930 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.750013 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.754353 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.754444 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.754553 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.755389 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.755569 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.755757 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.755962 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.756076 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.756181 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.760448 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.760554 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.760664 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.761349 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.761438 3941 log.go:181] 
(0x2d20380) (5) Data frame handling\nI1005 11:23:41.761533 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/I1005 11:23:41.761620 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.761742 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.761815 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.761958 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.762070 3941 log.go:181] (0x2d20380) (5) Data frame sent\n\nI1005 11:23:41.762180 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.765722 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.765873 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.766011 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.766740 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.766863 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.767005 3941 log.go:181] (0x2d20380) (5) Data frame sent\n+ echo\nI1005 11:23:41.767113 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.767225 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.767316 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.767424 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.767574 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.767743 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.773846 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.773962 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.774150 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.774864 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.774945 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q 
-s --connect-timeout 2I1005 11:23:41.775030 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.775200 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.775383 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.775501 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.775609 3941 log.go:181] (0x2d20380) (5) Data frame handling\n http://172.18.0.12:32447/\nI1005 11:23:41.775734 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.775885 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.782916 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.783013 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.783135 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.783265 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.783387 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -sI1005 11:23:41.783495 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.783873 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.783973 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.784116 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.784229 3941 log.go:181] (0x2d20380) (5) Data frame handling\n --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.784424 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.784574 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.787393 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.787519 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.787682 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.787814 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.787946 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 
11:23:41.788076 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.788155 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.788247 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.788331 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.793405 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.793478 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.793553 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.794036 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.794125 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.794214 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.794363 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.794472 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.794547 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.799066 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.799169 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.799270 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.799569 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.799729 3941 log.go:181] (0x2d20380) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:41.799845 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.799976 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.800088 3941 log.go:181] (0x2d20380) (5) Data frame sent\nI1005 11:23:41.800226 3941 log.go:181] (0x2d20150) (3) Data frame sent\nI1005 11:23:41.803949 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.804121 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.804311 3941 log.go:181] (0x2d20150) (3) Data 
frame sent\nI1005 11:23:41.804965 3941 log.go:181] (0x2c6b500) Data frame received for 3\nI1005 11:23:41.805083 3941 log.go:181] (0x2d20150) (3) Data frame handling\nI1005 11:23:41.805213 3941 log.go:181] (0x2c6b500) Data frame received for 5\nI1005 11:23:41.805328 3941 log.go:181] (0x2d20380) (5) Data frame handling\nI1005 11:23:41.806639 3941 log.go:181] (0x2c6b500) Data frame received for 1\nI1005 11:23:41.806766 3941 log.go:181] (0x2c6b570) (1) Data frame handling\nI1005 11:23:41.806896 3941 log.go:181] (0x2c6b570) (1) Data frame sent\nI1005 11:23:41.807551 3941 log.go:181] (0x2c6b500) (0x2c6b570) Stream removed, broadcasting: 1\nI1005 11:23:41.810014 3941 log.go:181] (0x2c6b500) Go away received\nI1005 11:23:41.813084 3941 log.go:181] (0x2c6b500) (0x2c6b570) Stream removed, broadcasting: 1\nI1005 11:23:41.813291 3941 log.go:181] (0x2c6b500) (0x2d20150) Stream removed, broadcasting: 3\nI1005 11:23:41.813455 3941 log.go:181] (0x2c6b500) (0x2d20380) Stream removed, broadcasting: 5\n" Oct 5 11:23:41.829: INFO: stdout: "\naffinity-nodeport-transition-5jf45\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-2qqfh\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-5jf45\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-2qqfh\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-5jf45\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-2qqfh" Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-5jf45 Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-2qqfh Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.829: INFO: 
Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-5jf45 Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.829: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-2qqfh Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-5jf45 Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:41.830: INFO: Received response from host: affinity-nodeport-transition-2qqfh Oct 5 11:23:41.844: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config exec --namespace=services-7424 execpod-affinity6v5r9 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.12:32447/ ; done' Oct 5 11:23:43.468: INFO: stderr: "I1005 11:23:43.218296 3961 log.go:181] (0x3010070) (0x3010150) Create stream\nI1005 11:23:43.223042 3961 log.go:181] (0x3010070) (0x3010150) Stream added, broadcasting: 1\nI1005 11:23:43.235582 3961 log.go:181] (0x3010070) Reply frame received for 1\nI1005 11:23:43.235989 3961 log.go:181] (0x3010070) (0x28c1ea0) Create stream\nI1005 11:23:43.236048 3961 log.go:181] (0x3010070) (0x28c1ea0) Stream added, broadcasting: 3\nI1005 11:23:43.238155 3961 log.go:181] (0x3010070) Reply frame received for 3\nI1005 11:23:43.249622 3961 log.go:181] (0x3010070) (0x260a770) Create stream\nI1005 11:23:43.249759 3961 log.go:181] 
(0x3010070) (0x260a770) Stream added, broadcasting: 5\nI1005 11:23:43.251595 3961 log.go:181] (0x3010070) Reply frame received for 5\nI1005 11:23:43.343632 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.343949 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.344095 3961 log.go:181] (0x260a770) (5) Data frame handling\nI1005 11:23:43.344295 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.344609 3961 log.go:181] (0x260a770) (5) Data frame sent\nI1005 11:23:43.344765 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\n+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:43.347360 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.347467 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.347593 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.348329 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.348483 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.348618 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.348787 3961 log.go:181] (0x260a770) (5) Data frame handling\nI1005 11:23:43.349003 3961 log.go:181] (0x260a770) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:43.349108 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.353387 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.353535 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.353660 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.353747 3961 log.go:181] (0x260a770) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:43.353832 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.353964 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.354032 3961 log.go:181] (0x28c1ea0) (3) Data 
frame handling\nI1005 11:23:43.354118 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.354195 3961 log.go:181] (0x260a770) (5) Data frame sent\nI1005 11:23:43.360458 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.360607 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.360787 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.361304 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.361432 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.361596 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.361720 3961 log.go:181] (0x260a770) (5) Data frame handling\nI1005 11:23:43.361892 3961 log.go:181] (0x260a770) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:43.362029 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.365470 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.365535 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.365617 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.366419 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.366514 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.366643 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.366746 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.366879 3961 log.go:181] (0x260a770) (5) Data frame handling\nI1005 11:23:43.367047 3961 log.go:181] (0x260a770) (5) Data frame sent\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\nI1005 11:23:43.372723 3961 log.go:181] (0x3010070) Data frame received for 3\nI1005 11:23:43.372784 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.372925 3961 log.go:181] (0x28c1ea0) (3) Data frame sent\nI1005 11:23:43.373677 3961 log.go:181] (0x3010070) Data frame received for 5\nI1005 11:23:43.373775 3961 log.go:181] 
(0x3010070) Data frame received for 3\nI1005 11:23:43.373902 3961 log.go:181] (0x28c1ea0) (3) Data frame handling\nI1005 11:23:43.373987 3961 log.go:181] (0x260a770) (5) Data frame handling\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.12:32447/\n[repeated SPDY data-frame received/handling/sent log lines for the remaining echo + curl iterations elided]\nI1005 11:23:43.449298 3961 log.go:181] (0x3010070) Data frame received for 1\nI1005 11:23:43.449387 3961 log.go:181] (0x3010150) (1) Data frame handling\nI1005 11:23:43.449486 3961 log.go:181] (0x3010150) (1) Data frame sent\nI1005 11:23:43.450411 3961 log.go:181] (0x3010070) (0x3010150) Stream removed, broadcasting: 1\nI1005 11:23:43.452334 3961 log.go:181] (0x3010070) Go away received\nI1005 11:23:43.455760 3961 log.go:181] (0x3010070) 
(0x3010150) Stream removed, broadcasting: 1\nI1005 11:23:43.456148 3961 log.go:181] (0x3010070) (0x28c1ea0) Stream removed, broadcasting: 3\nI1005 11:23:43.456414 3961 log.go:181] (0x3010070) (0x260a770) Stream removed, broadcasting: 5\n" Oct 5 11:23:43.473: INFO: stdout: "\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj\naffinity-nodeport-transition-v7wsj" Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: 
affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Received response from host: affinity-nodeport-transition-v7wsj Oct 5 11:23:43.474: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-7424, will wait for the garbage collector to delete the pods Oct 5 11:23:43.573: INFO: Deleting ReplicationController affinity-nodeport-transition took: 18.543898ms Oct 5 11:23:44.074: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 501.1789ms [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:23:58.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7424" for this suite. 
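The test above fires repeated curl requests at the NodePort and checks that every response names the same backend pod. The verification logic can be sketched as follows; this is a toy model under stated assumptions (a hypothetical `check_affinity` helper, not the e2e framework's actual Go code):

```python
def check_affinity(stdout: str, expect_affinity: bool) -> bool:
    """Parse the newline-separated pod hostnames returned by the curl loop
    and decide whether the responses are consistent with session affinity."""
    hosts = [line for line in stdout.splitlines() if line]
    if not hosts:
        return False  # no responses at all means the check cannot pass
    unique = set(hosts)
    # With sessionAffinity: ClientIP every request should land on the same
    # backend; without it, traffic should spread across more than one backend.
    return (len(unique) == 1) if expect_affinity else (len(unique) > 1)

# The stdout captured above: 16 responses, all from the same pod.
stdout = "\n" + "\n".join(["affinity-nodeport-transition-v7wsj"] * 16)
assert check_affinity(stdout, expect_affinity=True)
```

The "switch session affinity" variant of the test then flips the Service's `sessionAffinity` field and expects the opposite distribution.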
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 • [SLOW TEST:35.445 seconds] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":303,"completed":258,"skipped":4165,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:23:58.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 
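The init-container test that follows relies on kubelet's ordering guarantee: init containers run one at a time, in spec order, and each must exit successfully before the next starts; with `restartPolicy: Never` a failed init container fails the whole pod. A toy model of that semantics (a hypothetical helper, not the kubelet's actual code):

```python
def run_pod_init_containers(init_results):
    """Simulate init-container handling for a restartPolicy=Never pod.

    init_results is an ordered list of (name, succeeded) pairs, one per
    init container in spec.initContainers.
    """
    for name, ok in init_results:
        if not ok:
            # No retry with restartPolicy=Never: the pod fails at the
            # first init container that exits non-zero.
            return ("Failed", name)
    # All init containers completed; the regular containers may start.
    return ("Running", None)

assert run_pod_init_containers([("init-1", True), ("init-2", True)]) == ("Running", None)
```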
STEP: creating the pod Oct 5 11:23:58.326: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:24:05.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5353" for this suite. • [SLOW TEST:7.788 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":303,"completed":259,"skipped":4174,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:24:06.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:24:06.119: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Oct 5 11:24:26.840: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-75 create -f -' Oct 5 11:24:32.583: INFO: stderr: "" Oct 5 11:24:32.584: INFO: stdout: "e2e-test-crd-publish-openapi-6743-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 5 11:24:32.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-75 delete e2e-test-crd-publish-openapi-6743-crds test-cr' Oct 5 11:24:33.797: INFO: stderr: "" Oct 5 11:24:33.797: INFO: stdout: "e2e-test-crd-publish-openapi-6743-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Oct 5 11:24:33.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-75 apply -f -' Oct 5 11:24:36.300: INFO: stderr: "" Oct 5 11:24:36.300: INFO: stdout: "e2e-test-crd-publish-openapi-6743-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Oct 5 11:24:36.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-75 delete e2e-test-crd-publish-openapi-6743-crds test-cr' Oct 5 11:24:37.579: INFO: stderr: "" Oct 5 11:24:37.580: INFO: stdout: "e2e-test-crd-publish-openapi-6743-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Oct 5 11:24:37.580: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 
--kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6743-crds' Oct 5 11:24:40.522: INFO: stderr: "" Oct 5 11:24:40.522: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6743-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:25:01.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-75" for this suite. • [SLOW TEST:55.078 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":303,"completed":260,"skipped":4207,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:25:01.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment 
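The CRD test above passes because the schema root sets `x-kubernetes-preserve-unknown-fields: true`, which tells the API server's structural-schema pruning to keep properties the schema does not declare. A toy sketch of that pruning decision (simplified; the real API server always preserves `apiVersion`, `kind`, and `metadata` and prunes recursively):

```python
# Hypothetical minimal CRD validation schema, as the test's CRD would carry it.
crd_schema = {
    "openAPIV3Schema": {
        "type": "object",
        "x-kubernetes-preserve-unknown-fields": True,
    }
}

def prune(obj, schema):
    """Toy model of API-server pruning for a structural schema."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return obj  # keep everything, declared or not
    props = schema.get("properties", {})
    return {k: v for k, v in obj.items() if k in props}

cr = {"apiVersion": "example.com/v1", "kind": "Test", "anything": {"a": 1}}
assert prune(cr, crd_schema["openAPIV3Schema"]) == cr
```

This is why `kubectl create`/`apply` above accepted a custom resource with arbitrary unknown properties.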
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:25:01.199: INFO: Pod name rollover-pod: Found 0 pods out of 1 Oct 5 11:25:06.207: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 5 11:25:06.208: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Oct 5 11:25:08.214: INFO: Creating deployment "test-rollover-deployment" Oct 5 11:25:08.243: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Oct 5 11:25:10.268: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Oct 5 11:25:10.282: INFO: Ensure that both replica sets have 1 created replica Oct 5 11:25:10.292: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Oct 5 11:25:10.303: INFO: Updating deployment test-rollover-deployment Oct 5 11:25:10.303: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Oct 5 11:25:12.327: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Oct 5 11:25:12.341: INFO: Make sure deployment "test-rollover-deployment" is complete Oct 5 11:25:12.353: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:12.353: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493910, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:14.367: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:14.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493913, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:16.369: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:16.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493913, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:18.368: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:18.368: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493913, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:20.369: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:20.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, 
Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493913, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:22.370: INFO: all replica sets need to contain the pod-template-hash label Oct 5 11:25:22.370: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493913, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737493908, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-5797c7764\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:25:24.369: INFO: Oct 5 11:25:24.369: INFO: Ensure that both old replica 
sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 11:25:24.384: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2477 /apis/apps/v1/namespaces/deployment-2477/deployments/test-rollover-deployment 0fa95fd7-b3dc-4575-b16f-f298c9de5c59 3182109 2 2020-10-05 11:25:08 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-10-05 11:25:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 11:25:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb9fa068 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-10-05 11:25:08 +0000 
UTC,LastTransitionTime:2020-10-05 11:25:08 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-5797c7764" has successfully progressed.,LastUpdateTime:2020-10-05 11:25:23 +0000 UTC,LastTransitionTime:2020-10-05 11:25:08 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Oct 5 11:25:24.394: INFO: New ReplicaSet "test-rollover-deployment-5797c7764" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-5797c7764 deployment-2477 /apis/apps/v1/namespaces/deployment-2477/replicasets/test-rollover-deployment-5797c7764 ff11b4a9-9c5e-4ea9-9e99-5c609c2c3225 3182097 2 2020-10-05 11:25:10 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 0fa95fd7-b3dc-4575-b16f-f298c9de5c59 0xb9b0920 0xb9b0921}] [] [{kube-controller-manager Update apps/v1 2020-10-05 11:25:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fa95fd7-b3dc-4575-b16f-f298c9de5c59\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5797c7764,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb9b09a8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 11:25:24.394: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Oct 5 11:25:24.395: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2477 /apis/apps/v1/namespaces/deployment-2477/replicasets/test-rollover-controller dcba13ae-b005-4c06-9e59-14dbc7e13ea1 3182107 2 2020-10-05 11:25:01 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 0fa95fd7-b3dc-4575-b16f-f298c9de5c59 0xb9b0807 0xb9b0808}] [] [{e2e.test Update apps/v1 2020-10-05 11:25:01 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 11:25:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fa95fd7-b3dc-4575-b16f-f298c9de5c59\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xb9b08b8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 11:25:24.397: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-2477 /apis/apps/v1/namespaces/deployment-2477/replicasets/test-rollover-deployment-78bc8b888c 08709f41-0ade-491b-a767-7a7d81906688 3182045 2 2020-10-05 11:25:08 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 0fa95fd7-b3dc-4575-b16f-f298c9de5c59 0xb9b0a17 0xb9b0a18}] [] [{kube-controller-manager Update apps/v1 2020-10-05 11:25:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0fa95fd7-b3dc-4575-b16f-f298c9de5c59\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xb9b0aa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] 
nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 11:25:24.403: INFO: Pod "test-rollover-deployment-5797c7764-p82cc" is available: &Pod{ObjectMeta:{test-rollover-deployment-5797c7764-p82cc test-rollover-deployment-5797c7764- deployment-2477 /api/v1/namespaces/deployment-2477/pods/test-rollover-deployment-5797c7764-p82cc 7eeba879-580b-4801-8673-b25ad2736739 3182064 0 2020-10-05 11:25:10 +0000 UTC map[name:rollover-pod pod-template-hash:5797c7764] map[] [{apps/v1 ReplicaSet test-rollover-deployment-5797c7764 ff11b4a9-9c5e-4ea9-9e99-5c609c2c3225 0xb9b0fe0 0xb9b0fe1}] [] [{kube-controller-manager Update v1 2020-10-05 11:25:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ff11b4a9-9c5e-4ea9-9e99-5c609c2c3225\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 11:25:13 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.149\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-78rpd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-78rpd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-78rpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePoli
cy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:25:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:25:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:25:10 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.12,PodIP:10.244.2.149,StartTime:2020-10-05 11:25:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 11:25:12 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:17e61a0b9e498b6c73ed97670906be3d5a3ae394739c1bd5b619e1a004885cf0,ContainerID:containerd://5605c0c113f928ae5072f72538fea5501530f399261799d3ec08070c849f2757,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:25:24.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2477" for this suite. • [SLOW TEST:23.294 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":303,"completed":261,"skipped":4212,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:25:24.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:25:35.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5226" for this suite. • [SLOW TEST:11.335 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":303,"completed":262,"skipped":4223,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:25:35.756: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Oct 5 11:25:39.907: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-586 PodName:var-expansion-db5c0e4d-4792-4c32-a3ca-9ab4429ef114 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:25:39.908: INFO: >>> kubeConfig: /root/.kube/config I1005 11:25:40.019176 10 log.go:181] (0xa631420) (0xa631490) Create stream I1005 11:25:40.019356 10 log.go:181] (0xa631420) (0xa631490) Stream added, broadcasting: 1 I1005 11:25:40.023243 10 log.go:181] 
(0xa631420) Reply frame received for 1 I1005 11:25:40.023489 10 log.go:181] (0xa631420) (0xbc422a0) Create stream I1005 11:25:40.023598 10 log.go:181] (0xa631420) (0xbc422a0) Stream added, broadcasting: 3 I1005 11:25:40.025355 10 log.go:181] (0xa631420) Reply frame received for 3 I1005 11:25:40.025512 10 log.go:181] (0xa631420) (0xa631650) Create stream I1005 11:25:40.025593 10 log.go:181] (0xa631420) (0xa631650) Stream added, broadcasting: 5 I1005 11:25:40.027180 10 log.go:181] (0xa631420) Reply frame received for 5 I1005 11:25:40.086339 10 log.go:181] (0xa631420) Data frame received for 5 I1005 11:25:40.086567 10 log.go:181] (0xa631650) (5) Data frame handling I1005 11:25:40.086840 10 log.go:181] (0xa631420) Data frame received for 3 I1005 11:25:40.087193 10 log.go:181] (0xbc422a0) (3) Data frame handling I1005 11:25:40.087672 10 log.go:181] (0xa631420) Data frame received for 1 I1005 11:25:40.087855 10 log.go:181] (0xa631490) (1) Data frame handling I1005 11:25:40.088044 10 log.go:181] (0xa631490) (1) Data frame sent I1005 11:25:40.088210 10 log.go:181] (0xa631420) (0xa631490) Stream removed, broadcasting: 1 I1005 11:25:40.088464 10 log.go:181] (0xa631420) Go away received I1005 11:25:40.088789 10 log.go:181] (0xa631420) (0xa631490) Stream removed, broadcasting: 1 I1005 11:25:40.089003 10 log.go:181] (0xa631420) (0xbc422a0) Stream removed, broadcasting: 3 I1005 11:25:40.089106 10 log.go:181] (0xa631420) (0xa631650) Stream removed, broadcasting: 5 STEP: test for file in mounted path Oct 5 11:25:40.095: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-586 PodName:var-expansion-db5c0e4d-4792-4c32-a3ca-9ab4429ef114 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Oct 5 11:25:40.095: INFO: >>> kubeConfig: /root/.kube/config I1005 11:25:40.207607 10 log.go:181] (0xbc42a80) (0xbc42af0) Create stream I1005 11:25:40.207739 10 log.go:181] (0xbc42a80) (0xbc42af0) Stream 
added, broadcasting: 1 I1005 11:25:40.211975 10 log.go:181] (0xbc42a80) Reply frame received for 1 I1005 11:25:40.212345 10 log.go:181] (0xbc42a80) (0xa631810) Create stream I1005 11:25:40.212523 10 log.go:181] (0xbc42a80) (0xa631810) Stream added, broadcasting: 3 I1005 11:25:40.214832 10 log.go:181] (0xbc42a80) Reply frame received for 3 I1005 11:25:40.215016 10 log.go:181] (0xbc42a80) (0xa6319d0) Create stream I1005 11:25:40.215108 10 log.go:181] (0xbc42a80) (0xa6319d0) Stream added, broadcasting: 5 I1005 11:25:40.216552 10 log.go:181] (0xbc42a80) Reply frame received for 5 I1005 11:25:40.288240 10 log.go:181] (0xbc42a80) Data frame received for 3 I1005 11:25:40.288445 10 log.go:181] (0xa631810) (3) Data frame handling I1005 11:25:40.288576 10 log.go:181] (0xbc42a80) Data frame received for 5 I1005 11:25:40.288738 10 log.go:181] (0xa6319d0) (5) Data frame handling I1005 11:25:40.290109 10 log.go:181] (0xbc42a80) Data frame received for 1 I1005 11:25:40.290331 10 log.go:181] (0xbc42af0) (1) Data frame handling I1005 11:25:40.290571 10 log.go:181] (0xbc42af0) (1) Data frame sent I1005 11:25:40.290761 10 log.go:181] (0xbc42a80) (0xbc42af0) Stream removed, broadcasting: 1 I1005 11:25:40.290980 10 log.go:181] (0xbc42a80) Go away received I1005 11:25:40.291594 10 log.go:181] (0xbc42a80) (0xbc42af0) Stream removed, broadcasting: 1 I1005 11:25:40.291808 10 log.go:181] (0xbc42a80) (0xa631810) Stream removed, broadcasting: 3 I1005 11:25:40.291985 10 log.go:181] (0xbc42a80) (0xa6319d0) Stream removed, broadcasting: 5 STEP: updating the annotation value Oct 5 11:25:40.804: INFO: Successfully updated pod "var-expansion-db5c0e4d-4792-4c32-a3ca-9ab4429ef114" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Oct 5 11:25:40.839: INFO: Deleting pod "var-expansion-db5c0e4d-4792-4c32-a3ca-9ab4429ef114" in namespace "var-expansion-586" Oct 5 11:25:40.847: INFO: Wait up to 5m0s for pod "var-expansion-db5c0e4d-4792-4c32-a3ca-9ab4429ef114" to be fully deleted 
[AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:26:18.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-586" for this suite. • [SLOW TEST:43.181 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":303,"completed":263,"skipped":4239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:26:18.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward 
pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod pod-subpath-test-downwardapi-vzjv STEP: Creating a pod to test atomic-volume-subpath Oct 5 11:26:19.048: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vzjv" in namespace "subpath-4785" to be "Succeeded or Failed" Oct 5 11:26:19.074: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Pending", Reason="", readiness=false. Elapsed: 25.305525ms Oct 5 11:26:21.130: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082047935s Oct 5 11:26:23.138: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 4.090161366s Oct 5 11:26:25.146: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 6.097494225s Oct 5 11:26:27.153: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 8.10474083s Oct 5 11:26:29.161: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 10.112791101s Oct 5 11:26:31.167: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 12.119179215s Oct 5 11:26:33.178: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 14.130086761s Oct 5 11:26:35.186: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 16.137323257s Oct 5 11:26:37.193: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 18.144300957s Oct 5 11:26:39.200: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.151799476s Oct 5 11:26:41.208: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Running", Reason="", readiness=true. Elapsed: 22.159462711s Oct 5 11:26:43.216: INFO: Pod "pod-subpath-test-downwardapi-vzjv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.167808001s STEP: Saw pod success Oct 5 11:26:43.217: INFO: Pod "pod-subpath-test-downwardapi-vzjv" satisfied condition "Succeeded or Failed" Oct 5 11:26:43.221: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-vzjv container test-container-subpath-downwardapi-vzjv: STEP: delete the pod Oct 5 11:26:43.395: INFO: Waiting for pod pod-subpath-test-downwardapi-vzjv to disappear Oct 5 11:26:43.399: INFO: Pod pod-subpath-test-downwardapi-vzjv no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-vzjv Oct 5 11:26:43.400: INFO: Deleting pod "pod-subpath-test-downwardapi-vzjv" in namespace "subpath-4785" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:26:43.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4785" for this suite. 
• [SLOW TEST:24.473 seconds] [sig-storage] Subpath /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":303,"completed":264,"skipped":4303,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:26:43.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-api-machinery] 
Events /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:26:43.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-235" for this suite. •{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":303,"completed":265,"skipped":4310,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:26:43.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:26:49.192: INFO: Checking APIGroup: apiregistration.k8s.io Oct 5 11:26:49.194: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Oct 5 11:26:49.195: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.195: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Oct 5 
11:26:49.195: INFO: Checking APIGroup: extensions Oct 5 11:26:49.197: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Oct 5 11:26:49.197: INFO: Versions found [{extensions/v1beta1 v1beta1}] Oct 5 11:26:49.197: INFO: extensions/v1beta1 matches extensions/v1beta1 Oct 5 11:26:49.197: INFO: Checking APIGroup: apps Oct 5 11:26:49.200: INFO: PreferredVersion.GroupVersion: apps/v1 Oct 5 11:26:49.200: INFO: Versions found [{apps/v1 v1}] Oct 5 11:26:49.200: INFO: apps/v1 matches apps/v1 Oct 5 11:26:49.200: INFO: Checking APIGroup: events.k8s.io Oct 5 11:26:49.202: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Oct 5 11:26:49.202: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.202: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Oct 5 11:26:49.202: INFO: Checking APIGroup: authentication.k8s.io Oct 5 11:26:49.204: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Oct 5 11:26:49.204: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.204: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Oct 5 11:26:49.204: INFO: Checking APIGroup: authorization.k8s.io Oct 5 11:26:49.206: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Oct 5 11:26:49.206: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.206: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Oct 5 11:26:49.206: INFO: Checking APIGroup: autoscaling Oct 5 11:26:49.208: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Oct 5 11:26:49.208: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Oct 5 11:26:49.208: INFO: autoscaling/v1 matches autoscaling/v1 Oct 5 11:26:49.208: INFO: Checking APIGroup: batch Oct 5 11:26:49.211: INFO: PreferredVersion.GroupVersion: batch/v1 Oct 5 11:26:49.211: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Oct 5 
11:26:49.211: INFO: batch/v1 matches batch/v1 Oct 5 11:26:49.211: INFO: Checking APIGroup: certificates.k8s.io Oct 5 11:26:49.214: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Oct 5 11:26:49.214: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.214: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Oct 5 11:26:49.214: INFO: Checking APIGroup: networking.k8s.io Oct 5 11:26:49.216: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Oct 5 11:26:49.216: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.216: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Oct 5 11:26:49.216: INFO: Checking APIGroup: policy Oct 5 11:26:49.218: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Oct 5 11:26:49.218: INFO: Versions found [{policy/v1beta1 v1beta1}] Oct 5 11:26:49.218: INFO: policy/v1beta1 matches policy/v1beta1 Oct 5 11:26:49.218: INFO: Checking APIGroup: rbac.authorization.k8s.io Oct 5 11:26:49.220: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Oct 5 11:26:49.220: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.220: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Oct 5 11:26:49.220: INFO: Checking APIGroup: storage.k8s.io Oct 5 11:26:49.222: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Oct 5 11:26:49.222: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.222: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Oct 5 11:26:49.222: INFO: Checking APIGroup: admissionregistration.k8s.io Oct 5 11:26:49.225: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Oct 5 11:26:49.225: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.225: INFO: admissionregistration.k8s.io/v1 matches 
admissionregistration.k8s.io/v1 Oct 5 11:26:49.225: INFO: Checking APIGroup: apiextensions.k8s.io Oct 5 11:26:49.226: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Oct 5 11:26:49.227: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.227: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Oct 5 11:26:49.227: INFO: Checking APIGroup: scheduling.k8s.io Oct 5 11:26:49.229: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Oct 5 11:26:49.229: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.229: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Oct 5 11:26:49.229: INFO: Checking APIGroup: coordination.k8s.io Oct 5 11:26:49.231: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Oct 5 11:26:49.231: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.231: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Oct 5 11:26:49.231: INFO: Checking APIGroup: node.k8s.io Oct 5 11:26:49.233: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1beta1 Oct 5 11:26:49.233: INFO: Versions found [{node.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.233: INFO: node.k8s.io/v1beta1 matches node.k8s.io/v1beta1 Oct 5 11:26:49.234: INFO: Checking APIGroup: discovery.k8s.io Oct 5 11:26:49.236: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Oct 5 11:26:49.236: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Oct 5 11:26:49.236: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:26:49.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-248" for this suite. 
• [SLOW TEST:5.679 seconds] [sig-api-machinery] Discovery /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should validate PreferredVersion for each APIGroup [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":303,"completed":266,"skipped":4325,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:26:49.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating the pod Oct 5 11:26:53.902: INFO: Successfully updated pod "labelsupdate7d7ee6c8-aeef-47ce-a265-ba15d59c3212" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:26:57.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1793" for this suite. • [SLOW TEST:8.731 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":303,"completed":267,"skipped":4335,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:26:57.985: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name 
projected-configmap-test-volume-8b1af1d9-479c-4694-9484-8cbdc5e662c1 STEP: Creating a pod to test consume configMaps Oct 5 11:26:58.092: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0" in namespace "projected-6609" to be "Succeeded or Failed" Oct 5 11:26:58.096: INFO: Pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.709559ms Oct 5 11:27:00.102: INFO: Pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010176711s Oct 5 11:27:02.111: INFO: Pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0": Phase="Running", Reason="", readiness=true. Elapsed: 4.018720205s Oct 5 11:27:04.118: INFO: Pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026016778s STEP: Saw pod success Oct 5 11:27:04.118: INFO: Pod "pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0" satisfied condition "Succeeded or Failed" Oct 5 11:27:04.123: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0 container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:27:04.182: INFO: Waiting for pod pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0 to disappear Oct 5 11:27:04.243: INFO: Pod pod-projected-configmaps-49ab5cc6-a3b1-4f1a-8985-4f18cee7baf0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:27:04.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6609" for this suite. 
• [SLOW TEST:6.274 seconds] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":303,"completed":268,"skipped":4339,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:27:04.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:27:15.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5830" for this suite. • [SLOW TEST:11.148 seconds] [sig-api-machinery] ResourceQuota /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":303,"completed":269,"skipped":4377,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:27:15.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting the auto-created API token Oct 5 11:27:16.016: INFO: created pod pod-service-account-defaultsa Oct 5 11:27:16.017: INFO: pod pod-service-account-defaultsa service account token volume mount: true Oct 5 11:27:16.053: INFO: created pod pod-service-account-mountsa Oct 5 11:27:16.053: INFO: pod pod-service-account-mountsa service account token volume mount: true Oct 5 11:27:16.085: INFO: created pod pod-service-account-nomountsa Oct 5 11:27:16.085: INFO: pod pod-service-account-nomountsa service account token volume mount: false Oct 5 11:27:16.142: INFO: created pod pod-service-account-defaultsa-mountspec Oct 5 11:27:16.142: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Oct 5 11:27:16.195: INFO: created pod pod-service-account-mountsa-mountspec Oct 5 11:27:16.195: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Oct 5 11:27:16.224: INFO: created pod pod-service-account-nomountsa-mountspec Oct 5 
11:27:16.224: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Oct 5 11:27:16.258: INFO: created pod pod-service-account-defaultsa-nomountspec Oct 5 11:27:16.258: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Oct 5 11:27:16.346: INFO: created pod pod-service-account-mountsa-nomountspec Oct 5 11:27:16.346: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Oct 5 11:27:16.355: INFO: created pod pod-service-account-nomountsa-nomountspec Oct 5 11:27:16.355: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:27:16.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3729" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":303,"completed":270,"skipped":4391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:27:16.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Oct 5 11:27:37.597: INFO: Successfully updated pod "pod-update-activedeadlineseconds-74333861-6281-4dfb-9952-a8e0c00a4b69" Oct 5 11:27:37.598: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-74333861-6281-4dfb-9952-a8e0c00a4b69" in namespace "pods-3930" to be "terminated due to deadline exceeded" Oct 5 11:27:37.619: INFO: Pod "pod-update-activedeadlineseconds-74333861-6281-4dfb-9952-a8e0c00a4b69": Phase="Running", Reason="", readiness=true. 
Elapsed: 21.233228ms Oct 5 11:27:39.647: INFO: Pod "pod-update-activedeadlineseconds-74333861-6281-4dfb-9952-a8e0c00a4b69": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.049029218s Oct 5 11:27:39.647: INFO: Pod "pod-update-activedeadlineseconds-74333861-6281-4dfb-9952-a8e0c00a4b69" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:27:39.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3930" for this suite. • [SLOW TEST:23.234 seconds] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":303,"completed":271,"skipped":4421,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:27:39.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:78 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:27:41.520: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Oct 5 11:27:46.525: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Oct 5 11:27:48.536: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 Oct 5 11:27:48.587: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-8701 /apis/apps/v1/namespaces/deployment-8701/deployments/test-cleanup-deployment 750946f5-3606-4b43-9bad-5eaa4ea107f5 3182853 1 2020-10-05 11:27:48 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-10-05 11:27:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9312cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Oct 5 11:27:48.615: INFO: New ReplicaSet "test-cleanup-deployment-5d446bdd47" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5d446bdd47 deployment-8701 /apis/apps/v1/namespaces/deployment-8701/replicasets/test-cleanup-deployment-5d446bdd47 b89505ab-b673-4ca1-9d47-8920e1d82770 3182855 1 2020-10-05 11:27:48 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 750946f5-3606-4b43-9bad-5eaa4ea107f5 0x9313187 0x9313188}] [] [{kube-controller-manager Update apps/v1 2020-10-05 11:27:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"750946f5-3606-4b43-9bad-5eaa4ea107f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5d446bdd47,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.20 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0x9313218 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Oct 5 11:27:48.615: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Oct 5 11:27:48.616: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-8701 /apis/apps/v1/namespaces/deployment-8701/replicasets/test-cleanup-controller 19b19d64-b7fb-492e-b35d-becd44fb8fba 3182854 1 2020-10-05 11:27:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 750946f5-3606-4b43-9bad-5eaa4ea107f5 0x931306f 0x9313080}] [] [{e2e.test Update apps/v1 2020-10-05 11:27:41 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2020-10-05 11:27:48 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"750946f5-3606-4b43-9bad-5eaa4ea107f5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] 
{map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0x9313118 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Oct 5 11:27:48.657: INFO: Pod "test-cleanup-controller-qnrk8" is available: &Pod{ObjectMeta:{test-cleanup-controller-qnrk8 test-cleanup-controller- deployment-8701 /api/v1/namespaces/deployment-8701/pods/test-cleanup-controller-qnrk8 5c7ec6f0-6621-4cb0-a658-7b16f5f2106e 3182845 0 2020-10-05 11:27:41 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 19b19d64-b7fb-492e-b35d-becd44fb8fba 0x90ddcd7 0x90ddcd8}] [] [{kube-controller-manager Update v1 2020-10-05 11:27:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"19b19d64-b7fb-492e-b35d-becd44fb8fba\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2020-10-05 11:27:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-llpb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-llpb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-llpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,D
eprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:27:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:27:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:27:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.1.178,StartTime:2020-10-05 11:27:41 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-10-05 11:27:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0295a31cb462577362b1239d3f8b531483472eb1419a85cfe583fd53841e88aa,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Oct 5 11:27:48.660: INFO: Pod "test-cleanup-deployment-5d446bdd47-p5wbj" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-5d446bdd47-p5wbj test-cleanup-deployment-5d446bdd47- deployment-8701 /api/v1/namespaces/deployment-8701/pods/test-cleanup-deployment-5d446bdd47-p5wbj cc578c82-35f7-4689-b91c-f7850e89f8aa 3182860 0 2020-10-05 11:27:48 +0000 UTC map[name:cleanup-pod pod-template-hash:5d446bdd47] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-5d446bdd47 b89505ab-b673-4ca1-9d47-8920e1d82770 0x90ddea7 0x90ddea8}] [] [{kube-controller-manager Update v1 2020-10-05 11:27:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b89505ab-b673-4ca1-9d47-8920e1d82770\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-llpb6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-llpb6,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.20,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-llpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,Win
dowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-10-05 11:27:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:27:48.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8701" for this suite. • [SLOW TEST:9.024 seconds] [sig-apps] Deployment /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":303,"completed":272,"skipped":4441,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:27:48.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod with failed condition STEP: updating the pod 
Oct 5 11:29:49.401: INFO: Successfully updated pod "var-expansion-242d4a90-6b86-4fd5-aa1c-348046e7baf1" STEP: waiting for pod running STEP: deleting the pod gracefully Oct 5 11:29:51.470: INFO: Deleting pod "var-expansion-242d4a90-6b86-4fd5-aa1c-348046e7baf1" in namespace "var-expansion-2736" Oct 5 11:29:51.476: INFO: Wait up to 5m0s for pod "var-expansion-242d4a90-6b86-4fd5-aa1c-348046e7baf1" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:30:25.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2736" for this suite. • [SLOW TEST:156.766 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":303,"completed":273,"skipped":4448,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 
11:30:25.530: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:30:25.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7086" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":303,"completed":274,"skipped":4460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:30:25.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward API volume plugin Oct 5 11:30:25.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25" in namespace "projected-9864" to be "Succeeded or Failed" Oct 5 11:30:25.787: INFO: Pod "downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087802ms Oct 5 11:30:27.803: INFO: Pod "downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023008699s Oct 5 11:30:29.814: INFO: Pod "downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033177627s STEP: Saw pod success Oct 5 11:30:29.814: INFO: Pod "downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25" satisfied condition "Succeeded or Failed" Oct 5 11:30:29.819: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25 container client-container: STEP: delete the pod Oct 5 11:30:29.865: INFO: Waiting for pod downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25 to disappear Oct 5 11:30:29.876: INFO: Pod downwardapi-volume-19fdc174-7a55-4ed7-95da-d33c2c57bb25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:30:29.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9864" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":275,"skipped":4488,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:30:29.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Oct 5 11:30:30.067: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:30.074: INFO: Number of nodes with available pods: 0 Oct 5 11:30:30.074: INFO: Node kali-worker is running more than one daemon pod Oct 5 11:30:31.084: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:31.089: INFO: Number of nodes with available pods: 0 Oct 5 11:30:31.089: INFO: Node kali-worker is running more than one daemon pod Oct 5 11:30:32.085: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:32.091: INFO: Number of nodes with available pods: 0 Oct 5 11:30:32.091: INFO: Node kali-worker is running more than one daemon pod Oct 5 11:30:33.109: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:33.117: INFO: Number of nodes with available pods: 0 Oct 5 11:30:33.117: INFO: Node kali-worker is running more than one daemon pod Oct 5 11:30:34.090: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:34.152: INFO: Number of nodes with available pods: 1 Oct 5 11:30:34.153: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 11:30:35.082: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:35.088: INFO: Number of nodes with available pods: 2 Oct 5 11:30:35.088: INFO: Number of running nodes: 2, number 
of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Oct 5 11:30:35.157: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:35.173: INFO: Number of nodes with available pods: 1 Oct 5 11:30:35.173: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 11:30:36.185: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:36.191: INFO: Number of nodes with available pods: 1 Oct 5 11:30:36.191: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 11:30:37.257: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:37.264: INFO: Number of nodes with available pods: 1 Oct 5 11:30:37.264: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 11:30:38.187: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:38.194: INFO: Number of nodes with available pods: 1 Oct 5 11:30:38.195: INFO: Node kali-worker2 is running more than one daemon pod Oct 5 11:30:39.187: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Oct 5 11:30:39.195: INFO: Number of nodes with available pods: 2 Oct 5 11:30:39.196: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3512, will wait for the garbage collector to delete the pods Oct 5 11:30:39.269: INFO: Deleting DaemonSet.extensions daemon-set took: 9.211077ms Oct 5 11:30:39.670: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.832717ms Oct 5 11:30:48.689: INFO: Number of nodes with available pods: 0 Oct 5 11:30:48.689: INFO: Number of running nodes: 0, number of available pods: 0 Oct 5 11:30:48.695: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3512/daemonsets","resourceVersion":"3183544"},"items":null} Oct 5 11:30:48.698: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3512/pods","resourceVersion":"3183544"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:30:48.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3512" for this suite. 
• [SLOW TEST:18.836 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":303,"completed":276,"skipped":4502,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:30:48.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Oct 5 11:30:55.396: INFO: Successfully updated pod "adopt-release-2gqk6" STEP: Checking that the Job readopts the Pod Oct 5 11:30:55.396: INFO: Waiting up to 15m0s for pod "adopt-release-2gqk6" in namespace "job-3471" to be "adopted" Oct 5 11:30:55.427: INFO: Pod "adopt-release-2gqk6": Phase="Running", Reason="", readiness=true. 
Elapsed: 31.06677ms Oct 5 11:30:57.445: INFO: Pod "adopt-release-2gqk6": Phase="Running", Reason="", readiness=true. Elapsed: 2.049154188s Oct 5 11:30:57.446: INFO: Pod "adopt-release-2gqk6" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Oct 5 11:30:57.965: INFO: Successfully updated pod "adopt-release-2gqk6" STEP: Checking that the Job releases the Pod Oct 5 11:30:57.965: INFO: Waiting up to 15m0s for pod "adopt-release-2gqk6" in namespace "job-3471" to be "released" Oct 5 11:30:57.983: INFO: Pod "adopt-release-2gqk6": Phase="Running", Reason="", readiness=true. Elapsed: 18.122024ms Oct 5 11:31:00.010: INFO: Pod "adopt-release-2gqk6": Phase="Running", Reason="", readiness=true. Elapsed: 2.0452931s Oct 5 11:31:00.011: INFO: Pod "adopt-release-2gqk6" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:31:00.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3471" for this suite. 
• [SLOW TEST:11.291 seconds] [sig-apps] Job /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":303,"completed":277,"skipped":4538,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:31:00.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a pod to test downward api env vars Oct 5 11:31:00.459: INFO: Waiting up to 5m0s for pod "downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20" in namespace "downward-api-6334" to be "Succeeded or Failed" Oct 5 11:31:00.514: INFO: Pod "downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20": Phase="Pending", Reason="", readiness=false. 
Elapsed: 54.847761ms Oct 5 11:31:02.521: INFO: Pod "downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062303862s Oct 5 11:31:04.529: INFO: Pod "downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069703882s STEP: Saw pod success Oct 5 11:31:04.529: INFO: Pod "downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20" satisfied condition "Succeeded or Failed" Oct 5 11:31:04.534: INFO: Trying to get logs from node kali-worker pod downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20 container dapi-container: STEP: delete the pod Oct 5 11:31:04.615: INFO: Waiting for pod downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20 to disappear Oct 5 11:31:04.642: INFO: Pod downward-api-211d0629-e6e4-4ffc-a572-22dbda1d5e20 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:31:04.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6334" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":303,"completed":278,"skipped":4548,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:31:04.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:782 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:31:04.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6907" for this suite. 
[AfterEach] [sig-network] Services /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:786 •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":303,"completed":279,"skipped":4556,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:31:04.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-dccbd852-7d89-45a3-8e89-fe79df3c80a7 STEP: Creating a pod to test consume configMaps Oct 5 11:31:04.984: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380" in namespace "projected-2359" to be "Succeeded or Failed" Oct 5 11:31:05.005: INFO: Pod "pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380": Phase="Pending", Reason="", readiness=false. Elapsed: 20.475277ms Oct 5 11:31:07.013: INFO: Pod "pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028396359s Oct 5 11:31:09.021: INFO: Pod "pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036457462s STEP: Saw pod success Oct 5 11:31:09.021: INFO: Pod "pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380" satisfied condition "Succeeded or Failed" Oct 5 11:31:09.026: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380 container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:31:09.080: INFO: Waiting for pod pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380 to disappear Oct 5 11:31:09.092: INFO: Pod pod-projected-configmaps-b8ff8215-d283-4d4c-a2fe-109f3e63f380 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:31:09.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2359" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":280,"skipped":4556,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:31:09.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-preemption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:89 Oct 5 11:31:09.220: INFO: Waiting up to 1m0s for all nodes to be ready Oct 5 11:32:09.303: INFO: Waiting for terminating namespaces to be deleted... [It] validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Create pods that use 2/3 of node resources. Oct 5 11:32:09.372: INFO: Created pod: pod0-sched-preemption-low-priority Oct 5 11:32:09.464: INFO: Created pod: pod1-sched-preemption-medium-priority STEP: Wait for pods to be scheduled. 
STEP: Run a high priority pod that has same requirements as that of lower priority pod [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:32:37.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-5055" for this suite. [AfterEach] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:77 • [SLOW TEST:88.594 seconds] [sig-scheduling] SchedulerPreemption [Serial] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates basic preemption works [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":303,"completed":281,"skipped":4574,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:32:37.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod 
[Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-shm2x in namespace proxy-1906 I1005 11:32:37.847997 10 runners.go:190] Created replication controller with name: proxy-service-shm2x, namespace: proxy-1906, replica count: 1 I1005 11:32:38.899587 10 runners.go:190] proxy-service-shm2x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:32:39.900266 10 runners.go:190] proxy-service-shm2x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:32:40.901372 10 runners.go:190] proxy-service-shm2x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I1005 11:32:41.902797 10 runners.go:190] proxy-service-shm2x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I1005 11:32:42.903571 10 runners.go:190] proxy-service-shm2x Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Oct 5 11:32:42.931: INFO: setup took 5.158787814s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Oct 5 11:32:42.959: INFO: (0) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 26.545847ms) Oct 5 11:32:42.959: INFO: (0) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 26.038495ms) Oct 5 11:32:42.959: INFO: (0) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 26.975988ms) Oct 5 11:32:42.964: INFO: (0) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 31.506176ms) Oct 5 11:32:42.964: INFO: (0) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 32.239381ms) Oct 5 11:32:42.965: INFO: (0) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 32.724509ms) Oct 5 11:32:42.965: INFO: (0) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 33.01026ms) Oct 5 11:32:42.965: INFO: (0) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 32.759327ms) Oct 5 11:32:42.968: INFO: (0) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 35.984352ms) Oct 5 11:32:42.968: INFO: (0) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 36.37421ms) Oct 5 11:32:42.970: INFO: (0) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 37.136426ms) Oct 5 11:32:42.990: INFO: (0) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... (200; 26.582439ms) Oct 5 11:32:43.018: INFO: (1) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 26.592401ms) Oct 5 11:32:43.019: INFO: (1) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 27.270875ms) Oct 5 11:32:43.019: INFO: (1) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 27.84118ms) Oct 5 11:32:43.019: INFO: (1) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 27.566799ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 28.115081ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 27.768508ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 28.024919ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 28.083419ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 28.211789ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 28.594707ms) Oct 5 11:32:43.020: INFO: (1) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 28.687967ms) Oct 5 11:32:43.022: INFO: (1) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 30.068252ms) Oct 5 11:32:43.022: INFO: (1) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 29.861836ms) Oct 5 11:32:43.022: INFO: (1) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 28.361871ms) Oct 5 11:32:43.052: INFO: (2) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 29.603736ms) Oct 5 11:32:43.052: INFO: (2) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... 
(200; 29.28398ms) Oct 5 11:32:43.052: INFO: (2) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 29.363652ms) Oct 5 11:32:43.052: INFO: (2) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 30.008755ms) Oct 5 11:32:43.052: INFO: (2) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 29.939428ms) Oct 5 11:32:43.053: INFO: (2) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 29.971369ms) Oct 5 11:32:43.053: INFO: (2) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... (200; 10.524618ms) Oct 5 11:32:43.066: INFO: (3) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 11.289771ms) Oct 5 11:32:43.066: INFO: (3) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 11.235519ms) Oct 5 11:32:43.066: INFO: (3) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 11.483912ms) Oct 5 11:32:43.066: INFO: (3) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 11.023741ms) Oct 5 11:32:43.066: INFO: (3) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 11.206184ms) Oct 5 11:32:43.067: INFO: (3) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 11.72125ms) Oct 5 11:32:43.067: INFO: (3) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 12.15223ms) Oct 5 11:32:43.067: INFO: (3) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 11.985576ms) Oct 5 11:32:43.067: INFO: (3) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 12.391454ms) Oct 5 11:32:43.067: INFO: (3) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test (200; 12.816547ms) Oct 5 11:32:43.072: INFO: (4) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... (200; 4.503784ms) Oct 5 11:32:43.073: INFO: (4) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 4.567306ms) Oct 5 11:32:43.075: INFO: (4) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.792639ms) Oct 5 11:32:43.075: INFO: (4) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... 
(200; 8.149028ms) Oct 5 11:32:43.076: INFO: (4) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 8.027888ms) Oct 5 11:32:43.076: INFO: (4) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 8.316015ms) Oct 5 11:32:43.076: INFO: (4) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 8.554665ms) Oct 5 11:32:43.077: INFO: (4) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 8.578763ms) Oct 5 11:32:43.077: INFO: (4) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 9.06869ms) Oct 5 11:32:43.077: INFO: (4) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 9.256758ms) Oct 5 11:32:43.078: INFO: (4) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 9.437419ms) Oct 5 11:32:43.078: INFO: (4) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 9.667981ms) Oct 5 11:32:43.078: INFO: (4) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 10.35878ms) Oct 5 11:32:43.082: INFO: (5) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 3.984064ms) Oct 5 11:32:43.083: INFO: (5) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test (200; 5.462977ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 7.137203ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 7.384069ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 7.65004ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 7.936914ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 7.822665ms) Oct 5 11:32:43.086: INFO: (5) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 8.071262ms) Oct 5 11:32:43.087: INFO: (5) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 8.175908ms) Oct 5 11:32:43.087: INFO: (5) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 8.167736ms) Oct 5 11:32:43.087: INFO: (5) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.292328ms) Oct 5 11:32:43.087: INFO: (5) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 8.60274ms) Oct 5 11:32:43.088: INFO: (5) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 9.549797ms) Oct 5 11:32:43.088: INFO: (5) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 9.938813ms) Oct 5 11:32:43.094: INFO: (6) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 5.519181ms) Oct 5 11:32:43.094: INFO: (6) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 5.635621ms) Oct 5 11:32:43.094: INFO: (6) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 5.733819ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 6.207554ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... 
(200; 5.919899ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.376837ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 6.56628ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... (200; 6.718367ms) Oct 5 11:32:43.095: INFO: (6) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 6.88357ms) Oct 5 11:32:43.096: INFO: (6) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.977704ms) Oct 5 11:32:43.096: INFO: (6) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 7.18234ms) Oct 5 11:32:43.096: INFO: (6) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 6.563514ms) Oct 5 11:32:43.104: INFO: (7) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 6.780177ms) Oct 5 11:32:43.107: INFO: (7) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... 
(200; 9.853045ms) Oct 5 11:32:43.107: INFO: (7) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 9.715969ms) Oct 5 11:32:43.107: INFO: (7) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 10.315747ms) Oct 5 11:32:43.107: INFO: (7) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 10.240876ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 10.386535ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 10.501913ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 10.470982ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 10.773646ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 10.546498ms) Oct 5 11:32:43.108: INFO: (7) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 10.723669ms) Oct 5 11:32:43.112: INFO: (8) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 3.915057ms) Oct 5 11:32:43.114: INFO: (8) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 5.325977ms) Oct 5 11:32:43.114: INFO: (8) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 5.491065ms) Oct 5 11:32:43.114: INFO: (8) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 5.66804ms) Oct 5 11:32:43.114: INFO: (8) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 6.273238ms) Oct 5 11:32:43.114: INFO: (8) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 6.180373ms) Oct 5 11:32:43.116: INFO: (8) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 7.783566ms) Oct 5 11:32:43.116: INFO: (8) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 7.520632ms) Oct 5 11:32:43.116: INFO: (8) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 7.932556ms) Oct 5 11:32:43.116: INFO: (8) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 7.980667ms) Oct 5 11:32:43.116: INFO: (8) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 7.909509ms) Oct 5 11:32:43.117: INFO: (8) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 8.575179ms) Oct 5 11:32:43.117: INFO: (8) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.555836ms) Oct 5 11:32:43.117: INFO: (8) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... (200; 31.499849ms) Oct 5 11:32:43.241: INFO: (9) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 32.206012ms) Oct 5 11:32:43.241: INFO: (9) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test (200; 35.167414ms) Oct 5 11:32:43.244: INFO: (9) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 35.391585ms) Oct 5 11:32:43.244: INFO: (9) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 36.205517ms) Oct 5 11:32:43.244: INFO: (9) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 36.447041ms) Oct 5 11:32:43.250: INFO: (10) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 5.51708ms) Oct 5 11:32:43.251: INFO: (10) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 6.922905ms) Oct 5 11:32:43.251: INFO: (10) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 6.982955ms) Oct 5 11:32:43.251: INFO: (10) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 6.542091ms) Oct 5 11:32:43.251: INFO: (10) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 6.334385ms) Oct 5 11:32:43.252: INFO: (10) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 6.423086ms) Oct 5 11:32:43.252: INFO: (10) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 7.023649ms) Oct 5 11:32:43.253: INFO: (10) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 9.567756ms) Oct 5 11:32:43.254: INFO: (10) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 10.063973ms) Oct 5 11:32:43.255: INFO: (10) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 9.483251ms) Oct 5 11:32:43.255: INFO: (10) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 9.893784ms) Oct 5 11:32:43.255: INFO: (10) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 9.775633ms) Oct 5 11:32:43.255: INFO: (10) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 9.965096ms) Oct 5 11:32:43.255: INFO: (10) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... 
(200; 9.881406ms) Oct 5 11:32:43.489: INFO: (11) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 234.126256ms) Oct 5 11:32:43.490: INFO: (11) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... (200; 234.198618ms) Oct 5 11:32:43.493: INFO: (11) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 237.817096ms) Oct 5 11:32:43.493: INFO: (11) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 238.165319ms) Oct 5 11:32:43.493: INFO: (11) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 238.30304ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 238.616206ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 238.842181ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 238.878033ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 238.865675ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 239.129339ms) Oct 5 11:32:43.494: INFO: (11) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 239.388803ms) Oct 5 11:32:43.495: INFO: (11) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 239.334571ms) Oct 5 11:32:43.495: INFO: (11) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 239.732013ms) Oct 5 11:32:43.495: INFO: (11) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... 
(200; 239.554936ms) Oct 5 11:32:43.495: INFO: (11) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 239.438204ms) Oct 5 11:32:43.500: INFO: (12) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 5.090929ms) Oct 5 11:32:43.502: INFO: (12) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.63846ms) Oct 5 11:32:43.502: INFO: (12) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 5.807249ms) Oct 5 11:32:43.502: INFO: (12) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 5.616616ms) Oct 5 11:32:43.502: INFO: (12) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 5.72492ms) Oct 5 11:32:43.505: INFO: (12) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 8.661001ms) Oct 5 11:32:43.505: INFO: (12) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... 
(200; 8.199725ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.867225ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.955778ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 9.427437ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 9.020551ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 9.369724ms) Oct 5 11:32:43.506: INFO: (12) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 10.580134ms) Oct 5 11:32:43.512: INFO: (13) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 5.180031ms) Oct 5 11:32:43.513: INFO: (13) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 5.831015ms) Oct 5 11:32:43.513: INFO: (13) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 6.229038ms) Oct 5 11:32:43.513: INFO: (13) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 6.470751ms) Oct 5 11:32:43.513: INFO: (13) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.504933ms) Oct 5 11:32:43.513: INFO: (13) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... 
(200; 7.416564ms) Oct 5 11:32:43.514: INFO: (13) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 7.888002ms) Oct 5 11:32:43.514: INFO: (13) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 7.698344ms) Oct 5 11:32:43.515: INFO: (13) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.267726ms) Oct 5 11:32:43.515: INFO: (13) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 8.447885ms) Oct 5 11:32:43.516: INFO: (13) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 8.906065ms) Oct 5 11:32:43.519: INFO: (14) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 3.530778ms) Oct 5 11:32:43.520: INFO: (14) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 4.095719ms) Oct 5 11:32:43.521: INFO: (14) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... (200; 4.975298ms) Oct 5 11:32:43.522: INFO: (14) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: ... 
(200; 6.056674ms) Oct 5 11:32:43.522: INFO: (14) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 6.335385ms) Oct 5 11:32:43.522: INFO: (14) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.332411ms) Oct 5 11:32:43.522: INFO: (14) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 6.326454ms) Oct 5 11:32:43.523: INFO: (14) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.626042ms) Oct 5 11:32:43.523: INFO: (14) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 6.724344ms) Oct 5 11:32:43.524: INFO: (14) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 7.67405ms) Oct 5 11:32:43.524: INFO: (14) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 7.756445ms) Oct 5 11:32:43.524: INFO: (14) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 7.797125ms) Oct 5 11:32:43.524: INFO: (14) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 8.091894ms) Oct 5 11:32:43.524: INFO: (14) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 8.25359ms) Oct 5 11:32:43.525: INFO: (14) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 8.54384ms) Oct 5 11:32:43.529: INFO: (15) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 4.444241ms) Oct 5 11:32:43.529: INFO: (15) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 4.535882ms) Oct 5 11:32:43.529: INFO: (15) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 4.626477ms) Oct 5 11:32:43.530: INFO: (15) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 
5.132395ms) Oct 5 11:32:43.530: INFO: (15) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 5.667334ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 5.686632ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 6.106148ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 6.323708ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 6.321935ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 6.298728ms) Oct 5 11:32:43.531: INFO: (15) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 6.443653ms) Oct 5 11:32:43.532: INFO: (15) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.571315ms) Oct 5 11:32:43.532: INFO: (15) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.774488ms) Oct 5 11:32:43.532: INFO: (15) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.85356ms) Oct 5 11:32:43.533: INFO: (15) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 7.578384ms) Oct 5 11:32:43.536: INFO: (16) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 3.373797ms) Oct 5 11:32:43.537: INFO: (16) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 3.910039ms) Oct 5 11:32:43.537: INFO: (16) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 4.102487ms) Oct 5 11:32:43.538: INFO: (16) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 
4.756231ms) Oct 5 11:32:43.538: INFO: (16) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 5.522511ms) Oct 5 11:32:43.539: INFO: (16) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 5.747889ms) Oct 5 11:32:43.539: INFO: (16) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 5.651289ms) Oct 5 11:32:43.539: INFO: (16) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.077681ms) Oct 5 11:32:43.539: INFO: (16) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 6.291239ms) Oct 5 11:32:43.540: INFO: (16) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 6.734185ms) Oct 5 11:32:43.540: INFO: (16) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 7.28141ms) Oct 5 11:32:43.540: INFO: (16) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 7.19737ms) Oct 5 11:32:43.541: INFO: (16) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 7.686275ms) Oct 5 11:32:43.546: INFO: (17) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname2/proxy/: tls qux (200; 5.130697ms) Oct 5 11:32:43.547: INFO: (17) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 5.509596ms) Oct 5 11:32:43.547: INFO: (17) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 5.275711ms) Oct 5 11:32:43.547: INFO: (17) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 4.971596ms) Oct 5 11:32:43.547: INFO: (17) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 5.62763ms) Oct 5 11:32:43.548: INFO: (17) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 6.762283ms) 
Oct 5 11:32:43.548: INFO: (17) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 6.56207ms) Oct 5 11:32:43.548: INFO: (17) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 6.27254ms) Oct 5 11:32:43.548: INFO: (17) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname1/proxy/: foo (200; 6.446203ms) Oct 5 11:32:43.549: INFO: (17) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... (200; 7.549134ms) Oct 5 11:32:43.550: INFO: (17) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 8.100793ms) Oct 5 11:32:43.550: INFO: (17) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 7.975349ms) Oct 5 11:32:43.550: INFO: (17) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 8.579064ms) Oct 5 11:32:43.550: INFO: (17) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 8.465834ms) Oct 5 11:32:43.554: INFO: (18) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 3.364582ms) Oct 5 11:32:43.554: INFO: (18) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 3.765026ms) Oct 5 11:32:43.556: INFO: (18) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test<... 
(200; 6.288663ms) Oct 5 11:32:43.557: INFO: (18) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7/proxy/: test (200; 6.212725ms) Oct 5 11:32:43.557: INFO: (18) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 6.758982ms) Oct 5 11:32:43.557: INFO: (18) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.785887ms) Oct 5 11:32:43.558: INFO: (18) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.908632ms) Oct 5 11:32:43.558: INFO: (18) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... (200; 6.964012ms) Oct 5 11:32:43.558: INFO: (18) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 7.664941ms) Oct 5 11:32:43.562: INFO: (19) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:462/proxy/: tls qux (200; 3.370867ms) Oct 5 11:32:43.563: INFO: (19) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:1080/proxy/: test<... (200; 3.978409ms) Oct 5 11:32:43.563: INFO: (19) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:443/proxy/: test (200; 6.598893ms) Oct 5 11:32:43.565: INFO: (19) /api/v1/namespaces/proxy-1906/pods/proxy-service-shm2x-jfvs7:162/proxy/: bar (200; 6.927986ms) Oct 5 11:32:43.565: INFO: (19) /api/v1/namespaces/proxy-1906/services/https:proxy-service-shm2x:tlsportname1/proxy/: tls baz (200; 7.0143ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:1080/proxy/: ... 
(200; 6.952641ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname2/proxy/: bar (200; 7.179001ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/services/proxy-service-shm2x:portname2/proxy/: bar (200; 7.563089ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/services/http:proxy-service-shm2x:portname1/proxy/: foo (200; 7.426347ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/pods/https:proxy-service-shm2x-jfvs7:460/proxy/: tls baz (200; 7.519835ms) Oct 5 11:32:43.566: INFO: (19) /api/v1/namespaces/proxy-1906/pods/http:proxy-service-shm2x-jfvs7:160/proxy/: foo (200; 7.684301ms) STEP: deleting ReplicationController proxy-service-shm2x in namespace proxy-1906, will wait for the garbage collector to delete the pods Oct 5 11:32:43.625: INFO: Deleting ReplicationController proxy-service-shm2x took: 4.949762ms Oct 5 11:32:44.226: INFO: Terminating ReplicationController proxy-service-shm2x pods took: 600.67202ms [AfterEach] version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:32:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1906" for this suite. 
• [SLOW TEST:11.040 seconds] [sig-network] Proxy /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59 should proxy through a service and a pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":303,"completed":282,"skipped":4589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:32:48.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9990.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9990.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK 
> /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9990.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9990.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9990.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 11:32:54.926: INFO: DNS probes using dns-9990/dns-test-7668f084-2884-43d9-be8d-4d02f259c331 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:32:54.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9990" for this suite. 
• [SLOW TEST:6.336 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":303,"completed":283,"skipped":4639,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:32:55.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-projected-all-test-volume-4e4b7ddb-48bd-41dc-b1ed-b90938e36f2d STEP: Creating secret with name secret-projected-all-test-volume-0686acaa-d695-49bf-8f7d-0f060a0bcecb STEP: Creating a pod to test Check all projections for projected volume plugin Oct 5 11:32:55.758: INFO: Waiting up to 5m0s for pod 
"projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88" in namespace "projected-1891" to be "Succeeded or Failed" Oct 5 11:32:55.811: INFO: Pod "projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88": Phase="Pending", Reason="", readiness=false. Elapsed: 52.719641ms Oct 5 11:32:57.820: INFO: Pod "projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0624875s Oct 5 11:32:59.860: INFO: Pod "projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88": Phase="Running", Reason="", readiness=true. Elapsed: 4.101908225s Oct 5 11:33:01.867: INFO: Pod "projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109480853s STEP: Saw pod success Oct 5 11:33:01.868: INFO: Pod "projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88" satisfied condition "Succeeded or Failed" Oct 5 11:33:01.872: INFO: Trying to get logs from node kali-worker pod projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88 container projected-all-volume-test: STEP: delete the pod Oct 5 11:33:01.910: INFO: Waiting for pod projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88 to disappear Oct 5 11:33:01.914: INFO: Pod projected-volume-fa103f5d-7f9b-4b2a-ae9c-35448c16aa88 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:01.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1891" for this suite. 
• [SLOW TEST:6.884 seconds] [sig-storage] Projected combined /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":303,"completed":284,"skipped":4648,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:01.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-f72f99a1-1f30-43a5-99b4-e2756a53dfb0 STEP: Creating a pod to test consume secrets Oct 5 11:33:02.090: INFO: Waiting up to 5m0s for pod "pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d" in namespace "secrets-9338" to be "Succeeded or Failed" Oct 5 11:33:02.101: INFO: Pod 
"pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.750596ms Oct 5 11:33:04.110: INFO: Pod "pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019316362s Oct 5 11:33:06.118: INFO: Pod "pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026880162s STEP: Saw pod success Oct 5 11:33:06.118: INFO: Pod "pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d" satisfied condition "Succeeded or Failed" Oct 5 11:33:06.123: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d container secret-volume-test: STEP: delete the pod Oct 5 11:33:06.186: INFO: Waiting for pod pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d to disappear Oct 5 11:33:06.212: INFO: Pod pod-secrets-995e135f-7c3f-48f3-a86e-dcbe39fc800d no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:06.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9338" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":303,"completed":285,"skipped":4673,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:06.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +notcp +noall +answer 
+search _http._tcp.test-service-2.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6701.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.17.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.17.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.17.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.17.51_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6701.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6701.svc.cluster.local SRV)" && test 
-n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6701.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6701.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6701.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6701.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 51.17.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.17.51_udp@PTR;check="$$(dig +tcp +noall +answer +search 51.17.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.17.51_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Oct 5 11:33:12.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.466: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.471: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.475: INFO: Unable to read 
wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.509: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.512: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.521: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:12.548: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:17.556: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.567: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.571: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.606: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.613: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.618: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod 
dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:17.644: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:22.555: INFO: Unable to read wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.560: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.570: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.602: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod 
dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.607: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:22.641: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:27.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod 
dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.567: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.572: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.617: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.621: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.624: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.626: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:27.645: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:32.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.566: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.571: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.604: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.608: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource 
(get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.612: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:32.640: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:37.556: INFO: Unable to read wheezy_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.561: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.565: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods 
dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.568: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.599: INFO: Unable to read jessie_udp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.604: INFO: Unable to read jessie_tcp@dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.622: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local from pod dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0: the server could not find the requested resource (get pods dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0) Oct 5 11:33:37.649: INFO: Lookups using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 failed for: [wheezy_udp@dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@dns-test-service.dns-6701.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local jessie_udp@dns-test-service.dns-6701.svc.cluster.local jessie_tcp@dns-test-service.dns-6701.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local 
jessie_tcp@_http._tcp.dns-test-service.dns-6701.svc.cluster.local] Oct 5 11:33:42.636: INFO: DNS probes using dns-6701/dns-test-768e9522-8fa9-4d12-8e6a-0959d08a70f0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:43.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6701" for this suite. • [SLOW TEST:37.606 seconds] [sig-network] DNS /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":303,"completed":286,"skipped":4711,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:43.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: create deployment with httpd image Oct 5 11:33:43.946: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f -' Oct 5 11:33:46.373: INFO: stderr: "" Oct 5 11:33:46.374: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Oct 5 11:33:46.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config diff -f -' Oct 5 11:33:50.360: INFO: rc: 1 Oct 5 11:33:50.361: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete -f -' Oct 5 11:33:51.510: INFO: stderr: "" Oct 5 11:33:51.510: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:51.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3019" for this suite. 
• [SLOW TEST:7.740 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl diff /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:888 should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":303,"completed":287,"skipped":4731,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:51.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting 
to observe notifications for all changes to the configmap after the first update Oct 5 11:33:52.200: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8519 /api/v1/namespaces/watch-8519/configmaps/e2e-watch-test-resource-version 871fa8be-1aa0-498a-a5a0-5c919fbb04b7 3184572 0 2020-10-05 11:33:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-05 11:33:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 11:33:52.201: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8519 /api/v1/namespaces/watch-8519/configmaps/e2e-watch-test-resource-version 871fa8be-1aa0-498a-a5a0-5c919fbb04b7 3184574 0 2020-10-05 11:33:51 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-10-05 11:33:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:52.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8519" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":303,"completed":288,"skipped":4735,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:52.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:181 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 Oct 5 11:33:52.313: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:33:56.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1064" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":303,"completed":289,"skipped":4746,"failed":0} ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:33:56.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name configmap-test-volume-map-de86f971-4c1d-4919-8d26-ea524822897a STEP: Creating a pod to test consume configMaps Oct 5 11:33:56.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331" in namespace "configmap-9997" to be "Succeeded or Failed" Oct 5 11:33:56.807: INFO: Pod "pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331": Phase="Pending", Reason="", readiness=false. Elapsed: 64.756755ms Oct 5 11:33:58.815: INFO: Pod "pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07268066s Oct 5 11:34:00.822: INFO: Pod "pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079767118s STEP: Saw pod success Oct 5 11:34:00.823: INFO: Pod "pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331" satisfied condition "Succeeded or Failed" Oct 5 11:34:00.827: INFO: Trying to get logs from node kali-worker pod pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331 container configmap-volume-test: STEP: delete the pod Oct 5 11:34:00.912: INFO: Waiting for pod pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331 to disappear Oct 5 11:34:01.033: INFO: Pod pod-configmaps-43c7c5c3-0f6b-49cd-bab5-d1c580a52331 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:34:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9997" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":303,"completed":290,"skipped":4746,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:34:01.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating configMap with name projected-configmap-test-volume-29988aeb-e9f7-44cf-8864-309542a8c1a9 STEP: Creating a pod to test consume configMaps Oct 5 11:34:01.167: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e" in namespace "projected-3476" to be "Succeeded or Failed" Oct 5 11:34:01.180: INFO: Pod "pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.307644ms Oct 5 11:34:03.189: INFO: Pod "pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022025024s Oct 5 11:34:05.197: INFO: Pod "pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029317978s STEP: Saw pod success Oct 5 11:34:05.197: INFO: Pod "pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e" satisfied condition "Succeeded or Failed" Oct 5 11:34:05.201: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e container projected-configmap-volume-test: STEP: delete the pod Oct 5 11:34:05.249: INFO: Waiting for pod pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e to disappear Oct 5 11:34:05.297: INFO: Pod pod-projected-configmaps-3c3c74c7-0cd6-428d-a689-368bfc9bda4e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:34:05.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3476" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":303,"completed":291,"skipped":4749,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:34:05.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Oct 5 11:34:05.457: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184683 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 11:34:05.458: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184684 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 11:34:05.459: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184685 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Oct 5 11:34:15.504: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184734 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 11:34:15.505: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184735 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Oct 5 11:34:15.506: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2523 /api/v1/namespaces/watch-2523/configmaps/e2e-watch-test-label-changed 166f2ae9-5b34-43bb-8053-01d3654d747e 3184736 0 2020-10-05 11:34:05 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-10-05 11:34:15 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:34:15.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2523" for this suite. 
• [SLOW TEST:10.279 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":303,"completed":292,"skipped":4764,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:34:15.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating pod liveness-031f609c-83ad-4611-8067-ac63176c75ea in namespace container-probe-591 Oct 5 11:34:19.693: INFO: Started pod 
liveness-031f609c-83ad-4611-8067-ac63176c75ea in namespace container-probe-591 STEP: checking the pod's current state and verifying that restartCount is present Oct 5 11:34:19.699: INFO: Initial restart count of pod liveness-031f609c-83ad-4611-8067-ac63176c75ea is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:38:21.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-591" for this suite. • [SLOW TEST:246.993 seconds] [k8s.io] Probing container /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":303,"completed":293,"skipped":4775,"failed":0} SSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:38:22.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [AfterEach] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:39:06.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5575" for this suite. • [SLOW TEST:51.722 seconds] [k8s.io] Lease /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 lease API should be available [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":303,"completed":294,"skipped":4778,"failed":0} SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:39:14.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] 
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: creating the pod Oct 5 11:40:29.694: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:41:12.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-413" for this suite. • [SLOW TEST:118.681 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":303,"completed":295,"skipped":4781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:41:12.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Oct 5 11:41:31.915: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Oct 5 11:41:33.995: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494892, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:41:37.468: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494892, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} Oct 5 11:42:22.000: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494892, loc:(*time.Location)(0x5d1d160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63737494891, loc:(*time.Location)(0x5d1d160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-cbccbf6bb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Oct 5 11:42:25.049: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap 
creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:42:35.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9252" for this suite. STEP: Destroying namespace "webhook-9252-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:83.289 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":303,"completed":296,"skipped":4820,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:42:36.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256 [It] should check if Kubernetes master services is included in cluster-info [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: validating cluster-info Oct 5 11:42:38.033: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config cluster-info' Oct 5 11:43:18.930: INFO: stderr: "" Oct 5 11:43:18.930: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:34561/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:43:18.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6018" for this suite. 
• [SLOW TEST:42.649 seconds] [sig-cli] Kubectl client /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl cluster-info /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1079 should check if Kubernetes master services is included in cluster-info [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":303,"completed":297,"skipped":4839,"failed":0} SSSSSSSSS ------------------------------ [sig-network] IngressClass API should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:43:18.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: getting /apis STEP: 
getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Oct 5 11:43:20.594: INFO: starting watch STEP: patching STEP: updating Oct 5 11:43:20.609: INFO: waiting for watch events with expected annotations Oct 5 11:43:20.611: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:43:20.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-5860" for this suite. •{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":303,"completed":298,"skipped":4848,"failed":0} SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:43:20.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: 
Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:43:36.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4254" for this suite. • [SLOW TEST:15.315 seconds] [sig-apps] ReplicationController /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":303,"completed":299,"skipped":4853,"failed":0} S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Oct 5 11:43:36.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 STEP: Creating secret with name secret-test-map-284408b5-10e1-4b23-8201-23284276ea90 STEP: Creating a pod to test consume secrets Oct 5 11:43:36.248: INFO: Waiting up to 5m0s for pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7" in namespace "secrets-9950" to be "Succeeded or Failed" Oct 5 11:43:36.266: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.708841ms Oct 5 11:43:38.273: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024080433s Oct 5 11:43:40.390: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141436368s Oct 5 11:43:42.739: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.490529256s Oct 5 11:43:45.498: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.24987789s Oct 5 11:43:47.504: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Running", Reason="", readiness=true. Elapsed: 11.25540927s Oct 5 11:43:50.812: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.563673198s STEP: Saw pod success Oct 5 11:43:50.813: INFO: Pod "pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7" satisfied condition "Succeeded or Failed" Oct 5 11:43:50.820: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7 container secret-volume-test: STEP: delete the pod Oct 5 11:43:52.384: INFO: Waiting for pod pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7 to disappear Oct 5 11:43:52.388: INFO: Pod pod-secrets-5961f2d6-3728-42d0-9bf3-ab8181c9c9e7 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Oct 5 11:43:52.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9950" for this suite. • [SLOW TEST:16.333 seconds] [sig-storage] Secrets /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":303,"completed":300,"skipped":4854,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597 [BeforeEach] [sig-cli] Kubectl client 
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:43:52.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:256
[It] should create and stop a working application [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: creating all guestbook components
Oct 5 11:43:52.562: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

Oct 5 11:43:52.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:43:54.479: INFO: stderr: ""
Oct 5 11:43:54.479: INFO: stdout: "service/agnhost-replica created\n"
Oct 5 11:43:54.480: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

Oct 5 11:43:54.480: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:43:56.640: INFO: stderr: ""
Oct 5 11:43:56.640: INFO: stdout: "service/agnhost-primary created\n"
Oct 5 11:43:56.641: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Oct 5 11:43:56.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:43:59.209: INFO: stderr: ""
Oct 5 11:43:59.210: INFO: stdout: "service/frontend created\n"
Oct 5 11:43:59.211: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Oct 5 11:43:59.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:44:01.567: INFO: stderr: ""
Oct 5 11:44:01.567: INFO: stdout: "deployment.apps/frontend created\n"
Oct 5 11:44:01.568: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 5 11:44:01.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:44:03.642: INFO: stderr: ""
Oct 5 11:44:03.642: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Oct 5 11:44:03.644: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.20
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Oct 5 11:44:03.644: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4180'
Oct 5 11:44:07.866: INFO: stderr: ""
Oct 5 11:44:07.866: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Oct 5 11:44:07.866: INFO: Waiting for all frontend pods to be Running.
Oct 5 11:44:37.920: INFO: Waiting for frontend to serve content.
Oct 5 11:44:37.931: INFO: Trying to add a new entry to the guestbook.
Oct 5 11:44:37.941: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Oct 5 11:44:37.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:39.341: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:39.341: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 11:44:39.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:40.813: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:40.813: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 11:44:40.814: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:42.056: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:42.056: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 11:44:42.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:43.383: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:43.383: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 11:44:43.384: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:44.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:44.662: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Oct 5 11:44:44.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:34561 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4180'
Oct 5 11:44:46.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Oct 5 11:44:46.241: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:44:46.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4180" for this suite.
• [SLOW TEST:55.278 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:351
    should create and stop a working application [Conformance]
    /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":303,"completed":301,"skipped":4879,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:44:47.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating a pod to test env composition
Oct 5 11:44:49.880: INFO: Waiting up to 5m0s for pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68" in namespace "var-expansion-3322" to be "Succeeded or Failed"
Oct 5 11:44:51.171: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 1.291408125s
Oct 5 11:44:53.933: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053085944s
Oct 5 11:44:57.405: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 7.524485953s
Oct 5 11:44:59.790: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 9.909691663s
Oct 5 11:45:01.858: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 11.978428959s
Oct 5 11:45:04.117: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 14.236592279s
Oct 5 11:45:07.928: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 18.048149068s
Oct 5 11:45:11.419: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 21.538628696s
Oct 5 11:45:13.495: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 23.615162529s
Oct 5 11:45:15.962: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 26.082280318s
Oct 5 11:45:18.027: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 28.146665452s
Oct 5 11:45:20.112: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 30.231852667s
Oct 5 11:45:22.358: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Pending", Reason="", readiness=false. Elapsed: 32.47820367s
Oct 5 11:45:24.364: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 34.484444985s
STEP: Saw pod success
Oct 5 11:45:24.365: INFO: Pod "var-expansion-84625188-0358-4c4d-9733-b43dbca27e68" satisfied condition "Succeeded or Failed"
Oct 5 11:45:24.374: INFO: Trying to get logs from node kali-worker2 pod var-expansion-84625188-0358-4c4d-9733-b43dbca27e68 container dapi-container:
STEP: delete the pod
Oct 5 11:45:24.464: INFO: Waiting for pod var-expansion-84625188-0358-4c4d-9733-b43dbca27e68 to disappear
Oct 5 11:45:24.469: INFO: Pod var-expansion-84625188-0358-4c4d-9733-b43dbca27e68 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:45:24.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3322" for this suite.
• [SLOW TEST:36.741 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:592
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":303,"completed":302,"skipped":4903,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Oct 5 11:45:24.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
STEP: Creating secret with name projected-secret-test-de27940b-cfa0-4169-afb7-84adcf9af555
STEP: Creating a pod to test consume secrets
Oct 5 11:45:24.682: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5" in namespace "projected-4618" to be "Succeeded or Failed"
Oct 5 11:45:24.739: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Pending", Reason="", readiness=false. Elapsed: 56.982873ms
Oct 5 11:45:26.747: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064712597s
Oct 5 11:45:29.045: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.362992979s
Oct 5 11:45:32.146: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.464584563s
Oct 5 11:45:35.325: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Running", Reason="", readiness=true. Elapsed: 10.642941582s
Oct 5 11:45:37.637: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Running", Reason="", readiness=true. Elapsed: 12.954928724s
Oct 5 11:45:39.643: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.960945266s
STEP: Saw pod success
Oct 5 11:45:39.643: INFO: Pod "pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5" satisfied condition "Succeeded or Failed"
Oct 5 11:45:40.846: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5 container secret-volume-test:
STEP: delete the pod
Oct 5 11:45:41.737: INFO: Waiting for pod pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5 to disappear
Oct 5 11:45:41.889: INFO: Pod pod-projected-secrets-0395bc9e-29d2-4fae-a112-94a1919453c5 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Oct 5 11:45:41.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4618" for this suite.
• [SLOW TEST:17.421 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:597
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":303,"completed":303,"skipped":4914,"failed":0}
SSSSSSSSSSSSSSS
Oct 5 11:45:41.903: INFO: Running AfterSuite actions on all nodes
Oct 5 11:45:41.904: INFO: Running AfterSuite actions on node 1
Oct 5 11:45:41.904: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":303,"completed":303,"skipped":4929,"failed":0}

Ran 303 of 5232 Specs in 7517.391 seconds
SUCCESS! -- 303 Passed | 0 Failed | 0 Pending | 4929 Skipped
PASS