Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1651874807 - Will randomize all specs
Will run 5773 specs

Running in parallel across 10 nodes

May 6 22:06:49.460: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.465: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 22:06:49.491: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 22:06:49.558: INFO: The status of Pod cmk-init-discover-node1-tp69t is Succeeded, skipping waiting
May 6 22:06:49.558: INFO: The status of Pod cmk-init-discover-node2-kt2nj is Succeeded, skipping waiting
May 6 22:06:49.558: INFO: 40 / 42 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 22:06:49.558: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
May 6 22:06:49.558: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 22:06:49.577: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'cmk' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-flannel' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-arm64' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-ppc64le' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 0 / 0 pods ready in namespace 'kube-system' in daemonset 'kube-flannel-ds-s390x' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-multus-ds-amd64' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'node-feature-discovery-worker' (0 seconds elapsed)
May 6 22:06:49.577: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'sriov-net-dp-kube-sriov-device-plugin-amd64' (0 seconds elapsed)
May 6 22:06:49.577: INFO: e2e test version: v1.21.9
May 6 22:06:49.578: INFO: kube-apiserver version: v1.21.1
May 6 22:06:49.579: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.585: INFO: Cluster IP family: ipv4
May 6 22:06:49.585: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.607: INFO: Cluster IP family: ipv4
May 6 22:06:49.593: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.615: INFO: Cluster IP family: ipv4
May 6 22:06:49.593: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.616: INFO: Cluster IP family: ipv4
May 6 22:06:49.599: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.619: INFO: Cluster IP family: ipv4
May 6 22:06:49.604: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.622: INFO: Cluster IP family: ipv4
May 6 22:06:49.607: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.628: INFO: Cluster IP family: ipv4
May 6 22:06:49.626: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.646: INFO: Cluster IP family: ipv4
May 6 22:06:49.627: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.647: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
May 6 22:06:49.654: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:06:49.682: INFO: Cluster IP family: ipv4
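For context, the schedulability gate logged above ("Waiting up to 30m0s for all (but 0) nodes to be schedulable") can be approximated with client-go. The sketch below is illustrative only, not the e2e framework's actual implementation; the kubeconfig path and 30-minute timeout mirror the log lines above.

```go
// Sketch: wait until every node is schedulable and Ready, similar to the
// framework's start-up gate logged above. Not the framework's real code.
package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll for up to 30m0s, matching the timeout in the log.
	err = wait.PollImmediate(10*time.Second, 30*time.Minute, func() (bool, error) {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // retry on transient errors
		}
		for _, n := range nodes.Items {
			if n.Spec.Unschedulable {
				return false, nil
			}
			for _, c := range n.Status.Conditions {
				if c.Type == v1.NodeReady && c.Status != v1.ConditionTrue {
					return false, nil
				}
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("all nodes schedulable and ready")
}
```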
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:06:49.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
W0506 22:06:49.704484 32 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 22:06:49.704: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 22:06:49.706: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting a starting resourceVersion
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:06:55.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7210" for this suite.

• [SLOW TEST:5.608 seconds]
[sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":1,"skipped":28,"failed":0}
SSSSS
------------------------------
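The Watchers test above asserts that watches opened from the same starting resourceVersion all observe events in the same order. A minimal client-go sketch of that idea, assuming an existing clientset; function and variable names are illustrative, and the conformance test's real helper in test/e2e/apimachinery is more involved:

```go
// Sketch: record the order of resourceVersions seen by a watch on ConfigMaps
// opened at a given starting resourceVersion. Several of these started
// concurrently from the same rv should produce identical sequences.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// recordWatchOrder returns the first n resourceVersions delivered on a
// watch over ConfigMaps in ns, starting from resourceVersion rv.
func recordWatchOrder(c kubernetes.Interface, ns, rv string, n int) ([]string, error) {
	w, err := c.CoreV1().ConfigMaps(ns).Watch(context.TODO(),
		metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return nil, err
	}
	defer w.Stop()

	var seen []string
	for event := range w.ResultChan() {
		if obj, ok := event.Object.(metav1.Object); ok {
			seen = append(seen, obj.GetResourceVersion())
		}
		if len(seen) >= n {
			break
		}
	}
	return seen, nil
}
```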
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:06:49.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
W0506 22:06:49.692180 29 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 22:06:49.692: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 22:06:49.694: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
May 6 22:06:50.127: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 22:06:50.138: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 22:06:52.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:06:54.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:06:56.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:06:58.151: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:07:00.152: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471610, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:07:03.160: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:03.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9145" for this suite. STEP: Destroying namespace "webhook-9145-markers" for this suite. 
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:07:03.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:07:03.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4249" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":2,"skipped":40,"failed":0}
SSSSSSSS
------------------------------
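The "fetching services" step is a single cross-namespace list call. A minimal sketch of the check, assuming an existing clientset (the function name is illustrative):

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// findService lists services across all namespaces (metav1.NamespaceAll)
// and reports whether one with the given name is present, which is the
// gist of "should find a service from listing all namespaces".
func findService(c kubernetes.Interface, name string) (bool, error) {
	svcs, err := c.CoreV1().Services(metav1.NamespaceAll).List(
		context.TODO(), metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, s := range svcs.Items {
		if s.Name == name {
			return true, nil
		}
	}
	return false, nil
}
```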
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:06:49.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
W0506 22:06:49.686111 31 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 22:06:49.686: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 22:06:49.687: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-1b9a79e7-4c2e-40ee-9631-01e542a3d2eb
STEP: Creating a pod to test consume configMaps
May 6 22:06:49.704: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b" in namespace "projected-3182" to be "Succeeded or Failed"
May 6 22:06:49.706: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155383ms
May 6 22:06:51.708: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004759225s
May 6 22:06:53.712: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008492319s
May 6 22:06:55.719: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015658842s
May 6 22:06:57.723: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018919092s
May 6 22:06:59.725: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.021355496s
May 6 22:07:01.729: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.025615427s
May 6 22:07:03.733: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.029041043s
STEP: Saw pod success
May 6 22:07:03.733: INFO: Pod "pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b" satisfied condition "Succeeded or Failed"
May 6 22:07:03.736: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b container agnhost-container:
STEP: delete the pod
May 6 22:07:03.815: INFO: Waiting for pod pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b to disappear
May 6 22:07:03.817: INFO: Pod pod-projected-configmaps-1ebad6d8-6898-4326-88ef-1d847508865b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:07:03.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3182" for this suite.

• [SLOW TEST:14.173 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
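The pod behind the poll loop above consumes the ConfigMap through a projected volume with a key-to-path mapping, prints the mapped file, and exits, which is why the expected terminal phase is "Succeeded". A trimmed sketch of such a pod spec; the key, paths, and image tag are illustrative, not the test's exact values:

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod builds a pod that mounts configMapName via a
// projected volume, mapping key "data-1" to "path/to/data-2", then cats
// the mapped file and exits (so the pod ends in phase Succeeded).
func projectedConfigMapPod(configMapName string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							ConfigMap: &v1.ConfigMapProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: configMapName},
								Items: []v1.KeyToPath{{
									Key:  "data-1",
									Path: "path/to/data-2",
								}},
							},
						}},
					},
				},
			}},
			Containers: []v1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32", // illustrative tag
				Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []v1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
}
```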
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:06:49.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
W0506 22:06:49.682139 25 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 22:06:49.682: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 22:06:49.684: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 6 22:06:49.686: INFO: Creating deployment "webserver-deployment"
May 6 22:06:49.690: INFO: Waiting for observed generation 1
May 6 22:06:51.695: INFO: Waiting for all required pods to come up
May 6 22:06:51.699: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May 6 22:07:03.705: INFO: Waiting for deployment "webserver-deployment" to complete
May 6 22:07:03.710: INFO: Updating deployment "webserver-deployment" with a non-existent image
May 6 22:07:03.717: INFO: Updating deployment webserver-deployment
May 6 22:07:03.717: INFO: Waiting for observed generation 2
May 6 22:07:05.723: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 6 22:07:05.726: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 6 22:07:05.730: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of
replicas May 6 22:07:05.739: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 6 22:07:05.739: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 6 22:07:05.741: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 6 22:07:05.747: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 6 22:07:05.747: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 6 22:07:05.754: INFO: Updating deployment webserver-deployment May 6 22:07:05.754: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 6 22:07:05.758: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 6 22:07:05.761: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:07:05.767: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-2947 2cecc44f-06b5-48d0-8795-446a8ecdfee4 31877 3 2022-05-06 22:06:49 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002bc30b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] 
[] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2022-05-06 22:07:03 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-06 22:07:05 +0000 UTC,LastTransitionTime:2022-05-06 22:07:05 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 6 22:07:05.771: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-2947 25dc5a37-0739-477c-875d-51bbbe892da8 31874 3 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2cecc44f-06b5-48d0-8795-446a8ecdfee4 0xc001ffe777 0xc001ffe778}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cecc44f-06b5-48d0-8795-446a8ecdfee4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001ffe7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] 
}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 22:07:05.771: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 6 22:07:05.771: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-2947 a06ba38b-ebed-4ef5-8643-556b436d02bd 31872 3 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2cecc44f-06b5-48d0-8795-446a8ecdfee4 0xc001ffe857 0xc001ffe858}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:06:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2cecc44f-06b5-48d0-8795-446a8ecdfee4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001ffe8c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 6 22:07:05.777: INFO: Pod "webserver-deployment-795d758f88-89n2q" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-89n2q webserver-deployment-795d758f88- deployment-2947 f1e67664-75f1-4ed4-b424-b090473cb122 31889 0 2022-05-06 22:07:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c4bff 0xc0039c4c10}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-df2f4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-df2f4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.778: INFO: Pod "webserver-deployment-795d758f88-b6dl9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-b6dl9 webserver-deployment-795d758f88- deployment-2947 daaa3de5-1784-440e-bdee-c09038228367 31833 0 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c4d4f 0xc0039c4d60}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x4wdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x4wdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-06 22:07:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.778: INFO: Pod "webserver-deployment-795d758f88-l7tk2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-l7tk2 webserver-deployment-795d758f88- deployment-2947 5c98d066-eca7-4e6c-92f6-47fdaa79eb64 31831 0 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c4f2f 0xc0039c4f40}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-55mjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-55mjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-06 22:07:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.778: INFO: Pod "webserver-deployment-795d758f88-mb9qs" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-mb9qs webserver-deployment-795d758f88- deployment-2947 bb70697e-c28f-4f3c-b7d4-9b5c48b4f1b7 31883 0 2022-05-06 22:07:05 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c510f 0xc0039c5120}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-866xh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-866xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.779: INFO: Pod "webserver-deployment-795d758f88-s4dhl" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-s4dhl webserver-deployment-795d758f88- deployment-2947 851f4b3d-20ad-4ea6-95e9-a9ede2f0e626 31850 0 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c528f 0xc0039c52a0}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:07:04 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w8wj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w8wj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-06 22:07:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.779: INFO: Pod "webserver-deployment-795d758f88-sjvkp" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-sjvkp webserver-deployment-795d758f88- deployment-2947 ae47d2f2-b2ad-44e7-ab60-cd71a546452a 31809 0 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c546f 0xc0039c5480}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dzl67,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dzl67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:,StartTime:2022-05-06 22:07:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.779: INFO: Pod "webserver-deployment-795d758f88-v4hc2" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-v4hc2 webserver-deployment-795d758f88- deployment-2947 59d25879-0a03-4707-9460-8edd7d91b448 31818 0 2022-05-06 22:07:03 +0000 UTC map[name:httpd pod-template-hash:795d758f88] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 25dc5a37-0739-477c-875d-51bbbe892da8 0xc0039c564f 0xc0039c5660}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"25dc5a37-0739-477c-875d-51bbbe892da8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:07:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bfxmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfxmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]Hos
tAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-06 22:07:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.780: INFO: Pod "webserver-deployment-847dcfb7fb-29q6p" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-29q6p webserver-deployment-847dcfb7fb- deployment-2947 4dbc2d7d-1a6b-4ccb-a527-3419f1ad64d5 31687 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.165" ], "mac": "96:b5:f4:d4:ee:6c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.165" ], "mac": "96:b5:f4:d4:ee:6c", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc0039c582f 0xc0039c5840}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.165\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5xl6b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xl6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.165,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:07:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://f307300f9035ef62b9aae975dedf6c65e1b44a94a4109737d5f302f97538df1c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.780: INFO: Pod "webserver-deployment-847dcfb7fb-6lgdp" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-6lgdp webserver-deployment-847dcfb7fb- deployment-2947 343d0ef0-d744-4885-bfb7-d68c512c5c60 31714 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.202" ], "mac": "c6:3d:96:3c:fd:a7", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.202" ], "mac": "c6:3d:96:3c:fd:a7", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc0039c5a2f 0xc0039c5a40}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.202\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cvq97,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cvq97,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.202,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:07:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://53fbfa248980d356e4a5e4110f6f6256c893b07ac3b1001609ceb509473fdf1f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.202,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.780: INFO: Pod "webserver-deployment-847dcfb7fb-9ckr2" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-9ckr2 webserver-deployment-847dcfb7fb- deployment-2947 c0fb0f30-8267-4000-a349-84da611f58c7 31717 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.200" ], "mac": "a2:ff:12:97:c5:a8", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.200" ], "mac": "a2:ff:12:97:c5:a8", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc0039c5c2f 0xc0039c5c40}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.200\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jgbbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgbbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volu
meDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.200,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:07:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://853a08a9983d383d21636e3c14f98b9028ddbb4e22207fd2a5783079b19b0062,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.200,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.781: INFO: Pod "webserver-deployment-847dcfb7fb-bcrcb" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-bcrcb webserver-deployment-847dcfb7fb- deployment-2947 bdbb5006-334c-4523-ae41-4ce4cfe070a2 31886 0 2022-05-06 22:07:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc0039c5e2f 0xc0039c5e40}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-76gxg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-76gxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exist
s,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.781: INFO: Pod "webserver-deployment-847dcfb7fb-c982q" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c982q webserver-deployment-847dcfb7fb- deployment-2947 9a162ae7-c942-4d68-920a-9787188403fe 31711 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.201" ], "mac": "92:1d:31:08:22:06", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.201" ], "mac": "92:1d:31:08:22:06", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc0039c5f6f 0xc0039c5f80}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.201\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5vwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5vwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.201,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:07:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://d1baae7c7287a072be885a15e870957023c4a6139a050e02a46b7d7b611bd180,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.781: INFO: Pod "webserver-deployment-847dcfb7fb-hnthz" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-hnthz webserver-deployment-847dcfb7fb- deployment-2947 da0fa933-f235-4cdd-aa05-de68759df036 31879 0 2022-05-06 22:07:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f0634f 0xc004f06360}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v4dsx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v4dsx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.782: INFO: Pod "webserver-deployment-847dcfb7fb-jh7xc" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jh7xc webserver-deployment-847dcfb7fb- deployment-2947 55f090ab-61e9-4b33-baba-61109232d9a0 31694 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.164" ], "mac": "56:cf:fc:39:16:87", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.164" ], "mac": "56:cf:fc:39:16:87", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f064bf 0xc004f064d0}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.164\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pjnzm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjnzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.164,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:07:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://4a09e74cb7d65164fb360d54fe764e2e205418d47ecaec9081d9f28d27aac272,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.782: INFO: Pod "webserver-deployment-847dcfb7fb-jpms6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-jpms6 webserver-deployment-847dcfb7fb- deployment-2947 a91144b9-6012-4863-9eb5-d1f5578470df 31690 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.163" ], "mac": "ba:89:b8:b5:1d:e2", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.163" ], "mac": "ba:89:b8:b5:1d:e2", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f066bf 0xc004f066d0}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:07:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.163\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjv6z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjv6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.163,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:06:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://54a210233d47fb8447f6f9df22555d4a433c2c75e40473bab9a6fa2c39e4e352,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.782: INFO: Pod "webserver-deployment-847dcfb7fb-ks9q5" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-ks9q5 webserver-deployment-847dcfb7fb- deployment-2947 bb1a5ce3-7d73-46ab-b77f-d608472cc235 31887 0 2022-05-06 22:07:05 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f068bf 0xc004f068d0}] [] [{kube-controller-manager Update v1 2022-05-06 22:07:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mvfmx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvfmx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:07:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.783: INFO: Pod "webserver-deployment-847dcfb7fb-pwj6d" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-pwj6d webserver-deployment-847dcfb7fb- deployment-2947 6709e240-2e9d-4775-b6c9-5209246aec7e 31651 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.161" ], "mac": "62:0c:76:ea:42:4b", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.161" ], "mac": "62:0c:76:ea:42:4b", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f06a2f 0xc004f06a40}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:06:59 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.161\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2x7qt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2x7qt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.161,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:06:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://2fa1faf9b0dd45b2c1ea4eb4c95aec6e7f6ea1f92e73cee575d8a44cffc59cf3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:07:05.783: INFO: Pod "webserver-deployment-847dcfb7fb-qkdg6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qkdg6 webserver-deployment-847dcfb7fb- deployment-2947 37ccc604-abfc-4ac7-884b-76abe90f03d4 31649 0 2022-05-06 22:06:49 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.162" ], "mac": "e2:0a:8f:80:ac:31", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.162" ], "mac": "e2:0a:8f:80:ac:31", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb a06ba38b-ebed-4ef5-8643-556b436d02bd 0xc004f06c2f 0xc004f06c40}] [] [{kube-controller-manager Update v1 2022-05-06 22:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a06ba38b-ebed-4ef5-8643-556b436d02bd\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:06:55 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:06:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.3.162\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7sc7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7sc7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tolerati
on{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:06:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:10.244.3.162,StartTime:2022-05-06 22:06:49 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:06:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://289e60e9dfd131a3d68aad11f6a3e3f5bc6d8a2d496bf436f9ab97ea26c01c91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.3.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:05.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2947" for this suite. 
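The pod dumps above belong to the proportional-scaling spec summarized next: a Deployment is scaled while a RollingUpdate is in flight, and the controller splits the new replica count across the old and new ReplicaSets in proportion to their sizes. A minimal client-go sketch of a Deployment with that shape — the replica count, surge/unavailable values, name, label, and namespace argument are illustrative assumptions, not values read out of the framework:

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createWebserverDeployment creates a 10-replica httpd Deployment whose
// RollingUpdate strategy tolerates 3 extra and 2 missing pods at a time;
// scaling such a Deployment mid-rollout exercises proportional scaling.
// All names and sizes here are illustrative.
func createWebserverDeployment(cs kubernetes.Interface, ns string) error {
	replicas := int32(10)
	maxSurge := intstr.FromInt(3)
	maxUnavailable := intstr.FromInt(2)
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "webserver-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "httpd"}},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,
					MaxUnavailable: &maxUnavailable,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "httpd"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "httpd",
					Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1",
				}}},
			},
		},
	}
	_, err := cs.AppsV1().Deployments(ns).Create(context.TODO(), d, metav1.CreateOptions{})
	return err
}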
• [SLOW TEST:16.147 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:49.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota W0506 22:06:49.677114 30 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:06:49.677: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:06:49.680: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:06.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7015" for this suite. • [SLOW TEST:17.090 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":-1,"completed":1,"skipped":11,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:49.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook W0506 22:06:49.713733 38 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:06:49.713: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:06:49.715: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:06:49.972: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:06:51.982: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:06:53.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:06:55.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:06:57.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:06:59.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:07:01.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:07:03.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471609, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:07:06.993: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:07:06.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4213-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:14.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8268" for this suite. STEP: Destroying namespace "webhook-8268-markers" for this suite. 
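The spec above registers a mutating webhook for the CRD-backed resource e2e-test-webhook-4213-crds.webhook.example.com and then creates a custom resource that the webhook must mutate. A hedged sketch of that registration step via the admissionregistration/v1 API — the configuration name, webhook name, rule, service reference, and path below are illustrative, and caBundle must be the CA that signed the webhook server's certificate:

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerCRMutatingWebhook points the API server at an in-cluster webhook
// service for CREATEs of a custom resource. Names and the path are
// illustrative assumptions, not the framework's actual values.
func registerCRMutatingWebhook(cs kubernetes.Interface, caBundle []byte) error {
	fail := admissionv1.Fail
	sideEffects := admissionv1.SideEffectClassNone
	path := "/mutating-custom-resource"
	cfg := &admissionv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionv1.MutatingWebhook{{
			Name: "mutate-custom-resource.webhook.example.com",
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-4213-crds"},
				},
			}},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-8268",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	return err
}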
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:24.925 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:03.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 6 22:07:03.405: INFO: Waiting up to 5m0s for pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035" in namespace "downward-api-7322" to be "Succeeded or Failed" May 6 22:07:03.408: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 2.294749ms May 6 22:07:05.413: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007352324s May 6 22:07:07.417: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012198764s May 6 22:07:09.422: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016288206s May 6 22:07:11.427: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 8.02202226s May 6 22:07:13.430: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02517445s May 6 22:07:15.436: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.030242775s STEP: Saw pod success May 6 22:07:15.436: INFO: Pod "downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035" satisfied condition "Succeeded or Failed" May 6 22:07:15.438: INFO: Trying to get logs from node node1 pod downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035 container dapi-container: STEP: delete the pod May 6 22:07:15.824: INFO: Waiting for pod downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035 to disappear May 6 22:07:15.826: INFO: Pod downward-api-20e37ac0-1f9e-4c8e-858d-52153ea64035 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:15.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7322" for this suite. 
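The Downward API spec above waits for its pod to reach Succeeded and then checks the container log: the kubelet resolves env vars declared with fieldRef against the pod's own metadata and status, which is exactly what "pod name, namespace and IP address as env vars" asserts. A minimal sketch of such a pod — the image, pod name, container name, and printed command are assumptions:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod builds a pod whose environment is filled in by the kubelet
// from the pod's own metadata.name, metadata.namespace, and status.podIP.
func downwardAPIPod() *corev1.Pod {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{Name: name, ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
		}}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1", // illustrative image
				Command: []string{"sh", "-c", "env"},                 // dump env so the test can grep it
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
}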
• [SLOW TEST:12.463 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":48,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0} [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:05.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-projected-all-test-volume-c2dd826c-5aec-4941-91d3-57ef271f8d37 STEP: Creating secret with name secret-projected-all-test-volume-8f55f7e7-eac1-465b-9c9c-7610e0a503b6 STEP: Creating a pod to test Check all projections for projected volume plugin May 6 22:07:05.841: INFO: Waiting up to 5m0s for pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc" in namespace "projected-7785" to be "Succeeded or Failed" May 6 22:07:05.843: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15248ms May 6 22:07:07.847: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005614878s May 6 22:07:09.851: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009438335s May 6 22:07:11.856: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014338381s May 6 22:07:13.860: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018535116s May 6 22:07:15.865: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.02376241s May 6 22:07:17.870: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.028815863s May 6 22:07:19.874: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.032475666s STEP: Saw pod success May 6 22:07:19.874: INFO: Pod "projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc" satisfied condition "Succeeded or Failed" May 6 22:07:19.878: INFO: Trying to get logs from node node1 pod projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc container projected-all-volume-test: STEP: delete the pod May 6 22:07:19.892: INFO: Waiting for pod projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc to disappear May 6 22:07:19.894: INFO: Pod projected-volume-0831f5bd-1eef-4f4c-b780-abc25fec4dbc no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:19.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7785" for this suite. • [SLOW TEST:14.102 seconds] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:15.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-deebb271-418b-492f-85e3-c6eeefe1dafa STEP: Creating a pod to test consume configMaps May 6 22:07:15.931: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755" in namespace "configmap-4300" to be "Succeeded or Failed" May 6 22:07:15.933: INFO: Pod "pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340556ms May 6 22:07:17.937: INFO: Pod "pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00582748s May 6 22:07:19.943: INFO: Pod "pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011775038s STEP: Saw pod success May 6 22:07:19.943: INFO: Pod "pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755" satisfied condition "Succeeded or Failed" May 6 22:07:19.949: INFO: Trying to get logs from node node1 pod pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755 container agnhost-container: STEP: delete the pod May 6 22:07:19.972: INFO: Waiting for pod pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755 to disappear May 6 22:07:19.974: INFO: Pod pod-configmaps-6c1f7816-5693-4f7b-94de-10084a3ba755 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:19.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4300" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":73,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:03.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ReplicationController STEP: waiting for RC to be added STEP: waiting for available Replicas STEP: patching ReplicationController STEP: waiting for RC to be modified STEP: patching ReplicationController status STEP: waiting for RC to be modified STEP: waiting for available Replicas STEP: fetching ReplicationController status STEP: patching ReplicationController scale STEP: waiting for RC to be modified STEP: waiting for ReplicationController's scale to be the max amount STEP: fetching ReplicationController; ensuring that it's patched STEP: updating ReplicationController status STEP: waiting for RC to be modified STEP: listing all ReplicationControllers STEP: checking that ReplicationController has expected values STEP: deleting ReplicationControllers by collection STEP: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:20.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8372" for this suite. 
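The ReplicationController lifecycle spec summarized next drives one object through create, patch, status patch, scale patch, and a delete-by-collection, confirming each step through a watch. The scale step is an ordinary strategic-merge patch of spec.replicas; a sketch under the assumption of an already-built clientset (function and argument names are mine, not the framework's):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// scaleRC patches spec.replicas on a ReplicationController, the same kind of
// strategic-merge patch the lifecycle test issues between its watch checks.
func scaleRC(cs kubernetes.Interface, ns, name string, replicas int32) error {
	patch := []byte(fmt.Sprintf(`{"spec":{"replicas":%d}}`, replicas))
	_, err := cs.CoreV1().ReplicationControllers(ns).Patch(
		context.TODO(), name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}

After such a patch, the watch delivers a MODIFIED event and the test waits for status.availableReplicas to reach the new scale.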
• [SLOW TEST:16.792 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:20.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:20.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2744" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":3,"skipped":28,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:20.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 6 22:07:20.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6363 efbaca9a-1d20-4b39-9276-7041ade1bdc2 32461 0 2022-05-06 22:07:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-06 22:07:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:07:20.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6363 efbaca9a-1d20-4b39-9276-7041ade1bdc2 32462 0 2022-05-06 22:07:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-06 22:07:20 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 6 22:07:20.787: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6363 efbaca9a-1d20-4b39-9276-7041ade1bdc2 32463 0 2022-05-06 22:07:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-06 22:07:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:07:20.789: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6363 efbaca9a-1d20-4b39-9276-7041ade1bdc2 32464 0 2022-05-06 22:07:20 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-05-06 22:07:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:20.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6363" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":4,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:19.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics May 6 22:07:21.048: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 6 22:07:21.176: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For 
garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:21.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6308" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":3,"skipped":15,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:49.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0506 22:06:49.634891 27 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:06:49.635: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:06:49.638: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-993dd8e1-b11e-4e2a-9a83-37c4b7a4a88e in namespace container-probe-8237 May 6 22:06:59.665: INFO: Started pod liveness-993dd8e1-b11e-4e2a-9a83-37c4b7a4a88e in namespace container-probe-8237 STEP: checking the pod's current state and verifying that restartCount is present May 6 22:06:59.668: INFO: Initial restart count of pod liveness-993dd8e1-b11e-4e2a-9a83-37c4b7a4a88e is 0 May 6 22:07:23.725: INFO: Restart count of pod container-probe-8237/liveness-993dd8e1-b11e-4e2a-9a83-37c4b7a4a88e is now 1 (24.057411273s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:23.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8237" for this suite. 
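------------------------------
[Editor's note] The Probing container test above creates a pod whose HTTP liveness probe against /healthz eventually starts failing, then waits for the kubelet to restart the container ("Restart count ... is now 1"). Below is a minimal client-go sketch of a pod with that probe shape, not the suite's actual code: the pod name "liveness-demo", the "default" namespace, the agnhost image tag, and the probe timings are all illustrative assumptions. Note the probe field is named Handler in the v1.21-era client this suite uses (renamed ProbeHandler in client-go v0.22+).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig path the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // assumed image/tag
				Args:  []string{"liveness"},                      // serves /healthz, then starts failing it
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in client-go >= v0.22
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Once /healthz returns non-2xx, the kubelet restarts the container and
	// pod.Status.ContainerStatuses[0].RestartCount increments -- the exact
	// signal the test polls for.
}
------------------------------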
• [SLOW TEST:34.142 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:19.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all May 6 22:07:20.020: INFO: Waiting up to 5m0s for pod "client-containers-8df96565-4acd-402b-adbf-cd5608e20893" in namespace "containers-2148" to be "Succeeded or Failed" May 6 22:07:20.024: INFO: Pod "client-containers-8df96565-4acd-402b-adbf-cd5608e20893": Phase="Pending", Reason="", readiness=false. Elapsed: 3.309144ms May 6 22:07:22.027: INFO: Pod "client-containers-8df96565-4acd-402b-adbf-cd5608e20893": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006829856s May 6 22:07:24.031: INFO: Pod "client-containers-8df96565-4acd-402b-adbf-cd5608e20893": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010152187s STEP: Saw pod success May 6 22:07:24.031: INFO: Pod "client-containers-8df96565-4acd-402b-adbf-cd5608e20893" satisfied condition "Succeeded or Failed" May 6 22:07:24.032: INFO: Trying to get logs from node node1 pod client-containers-8df96565-4acd-402b-adbf-cd5608e20893 container agnhost-container: STEP: delete the pod May 6 22:07:24.049: INFO: Waiting for pod client-containers-8df96565-4acd-402b-adbf-cd5608e20893 to disappear May 6 22:07:24.053: INFO: Pod client-containers-8df96565-4acd-402b-adbf-cd5608e20893 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:24.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2148" for this suite. 
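------------------------------
[Editor's note] The Docker Containers test above overrides both the image's default command and arguments via the pod spec. A minimal sketch of that override, under stated assumptions: the pod name "override-demo", the "default" namespace, the image tag, the /agnhost binary path, and the argv values are all illustrative, not taken from the suite's source.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "override-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "k8s.gcr.io/e2e-test-images/agnhost:2.32",    // assumed image/tag
				Command: []string{"/agnhost"},                         // replaces the image ENTRYPOINT
				Args:    []string{"entrypoint-tester", "one", "two"}, // replaces the image CMD
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The container's stdout (fetched via pod logs, as the test does after
	// "Saw pod success") shows the overridden argv, not the image defaults.
}
------------------------------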
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":75,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:21.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 6 22:07:21.284: INFO: Waiting up to 5m0s for pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684" in namespace "downward-api-1253" to be "Succeeded or Failed" May 6 22:07:21.286: INFO: Pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149469ms May 6 22:07:23.290: INFO: Pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005551078s May 6 22:07:25.295: INFO: Pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010901886s May 6 22:07:27.300: INFO: Pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016087696s STEP: Saw pod success May 6 22:07:27.300: INFO: Pod "downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684" satisfied condition "Succeeded or Failed" May 6 22:07:27.304: INFO: Trying to get logs from node node2 pod downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684 container dapi-container: STEP: delete the pod May 6 22:07:27.318: INFO: Waiting for pod downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684 to disappear May 6 22:07:27.320: INFO: Pod downward-api-7e5ca2c6-87f7-48da-99f4-2b4b2f082684 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:27.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1253" for this suite. 
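------------------------------
[Editor's note] The Downward API test above injects the node's IP into the container environment with a fieldRef on status.hostIP. A minimal sketch of that env wiring; the pod name "downward-env-demo", the "default" namespace, the busybox image, and the echo command are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.34", // assumed image
				Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
				Env: []corev1.EnvVar{{
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						// The kubelet resolves this to the node IP at start time.
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------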
• [SLOW TEST:6.095 seconds] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} S ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:55.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 6 22:06:55.341: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:57.345: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:59.346: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:01.344: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:03.344: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:05.345: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:07.348: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 6 22:07:07.364: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:09.369: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:11.369: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:13.368: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:15.378: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:17.369: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:19.371: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 22:07:19.392: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 22:07:19.395: INFO: Pod pod-with-poststart-exec-hook still exists May 6 22:07:21.396: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 22:07:21.399: INFO: Pod 
pod-with-poststart-exec-hook still exists May 6 22:07:23.396: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 22:07:23.398: INFO: Pod pod-with-poststart-exec-hook still exists May 6 22:07:25.395: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 22:07:25.399: INFO: Pod pod-with-poststart-exec-hook still exists May 6 22:07:27.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 22:07:27.399: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:27.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4656" for this suite. • [SLOW TEST:32.102 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":33,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:27.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename runtimeclass STEP: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/node.k8s.io STEP: getting /apis/node.k8s.io/v1 STEP: creating STEP: watching May 6 22:07:27.370: INFO: starting watch STEP: getting STEP: listing STEP: patching STEP: updating May 6 22:07:27.387: INFO: waiting for watch events with expected annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:27.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "runtimeclass-1108" for this suite. 
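------------------------------
[Editor's note] The RuntimeClass test above walks the node.k8s.io/v1 API through create/get/list/watch/patch/update/delete. A minimal sketch of the create/list/delete legs with client-go; the object name "demo-runtimeclass" and the "runc" handler are illustrative assumptions (the handler must match one configured in the node's CRI runtime).

package main

import (
	"context"
	"fmt"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rcClient := cs.NodeV1().RuntimeClasses() // cluster-scoped: no namespace argument

	created, err := rcClient.Create(context.TODO(), &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-runtimeclass"},
		Handler:    "runc", // assumed handler name
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	list, err := rcClient.List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s; cluster now has %d RuntimeClass objects\n", created.Name, len(list.Items))

	if err := rcClient.Delete(context.TODO(), created.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}
------------------------------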
•S ------------------------------ {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":5,"skipped":37,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:24.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:07:24.203: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012" in namespace "projected-9234" to be "Succeeded or Failed" May 6 22:07:24.205: INFO: Pod "downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.409132ms May 6 22:07:26.208: INFO: Pod "downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004781172s May 6 22:07:28.210: INFO: Pod "downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007602989s STEP: Saw pod success May 6 22:07:28.210: INFO: Pod "downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012" satisfied condition "Succeeded or Failed" May 6 22:07:28.213: INFO: Trying to get logs from node node1 pod downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012 container client-container: STEP: delete the pod May 6 22:07:28.225: INFO: Waiting for pod downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012 to disappear May 6 22:07:28.227: INFO: Pod downwardapi-volume-2beb7767-80ca-407e-868c-fd8a7d596012 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:28.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9234" for this suite. 
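------------------------------
[Editor's note] The Projected downwardAPI test above checks that DefaultMode is applied to files in a projected volume. A minimal sketch of a projected downward-API volume with a 0400 default mode; the pod/volume names, namespace, image, and the metadata.name item are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // applied to every file the projection writes
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.34", // assumed image
				Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The container's ls output should show -r-------- for /etc/podinfo/podname.
}
------------------------------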
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":122,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:27.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption is created May 6 22:07:27.466: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:29.469: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:31.470: INFO: The status of Pod pod-adoption is Running (Ready = true) STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-394" for this suite. • [SLOW TEST:5.057 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":3,"skipped":43,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:32.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:36.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5886" for this suite. 
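------------------------------
[Editor's note] The Kubelet test above runs a command that always fails and then asserts that the container status carries a terminated reason. A minimal sketch of reading that status; "failing-pod" and the "default" namespace are placeholders for a pod whose container has already exited.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "failing-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// For an exited container, State.Terminated is non-nil and carries the
	// reason (e.g. "Error") and exit code the test asserts on.
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			fmt.Printf("container %s: reason=%q exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}
------------------------------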
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":55,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:27.445: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:40.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9567" for this suite. • [SLOW TEST:13.102 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":-1,"completed":6,"skipped":50,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:06.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-5185 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 22:07:06.762: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 22:07:06.793: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:08.798: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:10.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:12.800: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:14.796: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:16.800: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:18.799: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:20.797: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:22.802: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:24.797: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:26.797: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:28.798: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:30.798: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:32.799: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:34.797: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:07:36.797: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 22:07:36.802: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 22:07:40.843: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 6 22:07:40.843: INFO: Going to poll 10.244.3.173 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 6 22:07:40.847: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.3.173:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5185 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:07:40.848: INFO: >>> kubeConfig: /root/.kube/config May 6 22:07:40.939: INFO: Found all 1 expected endpoints: [netserver-0] May 6 22:07:40.939: INFO: Going to poll 10.244.4.211 on port 8080 at least 0 times, with a maximum of 34 tries before failing May 6 22:07:40.941: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q 
-s --max-time 15 --connect-timeout 1 http://10.244.4.211:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5185 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:07:40.941: INFO: >>> kubeConfig: /root/.kube/config May 6 22:07:41.031: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:41.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5185" for this suite. • [SLOW TEST:34.299 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":13,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:40.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating secret secrets-7639/secret-test-48b411ad-bcf8-485c-8a26-28d6605dcc3c STEP: Creating a pod to test consume secrets May 6 22:07:40.669: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c" in namespace "secrets-7639" to be "Succeeded or Failed" May 6 22:07:40.674: INFO: Pod "pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.936291ms May 6 22:07:42.678: INFO: Pod "pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009151721s May 6 22:07:44.681: INFO: Pod "pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011952867s STEP: Saw pod success May 6 22:07:44.681: INFO: Pod "pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c" satisfied condition "Succeeded or Failed" May 6 22:07:44.684: INFO: Trying to get logs from node node1 pod pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c container env-test: STEP: delete the pod May 6 22:07:44.696: INFO: Waiting for pod pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c to disappear May 6 22:07:44.698: INFO: Pod pod-configmaps-dc4de854-5f31-4655-bbdf-fc518a939d1c no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:44.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7639" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":87,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:36.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Updating PodDisruptionBudget status STEP: Waiting for all pods to be running May 6 22:07:38.677: INFO: running pods: 0 < 1 May 6 22:07:40.684: INFO: running pods: 0 < 1 May 6 22:07:42.681: INFO: running pods: 0 < 1 STEP: locating a running pod STEP: Waiting for the pdb to be processed STEP: Patching PodDisruptionBudget status STEP: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:44.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-5894" for this suite. 
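------------------------------
[Editor's note] The DisruptionController test above creates a PodDisruptionBudget, then updates and patches its status subresource. A minimal sketch of the create and status-patch calls against policy/v1 (GA as of the v1.21 cluster in this log); the PDB name, namespace, selector, and the status field written by the patch are illustrative assumptions.

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pdbClient := cs.PolicyV1().PodDisruptionBudgets("default") // namespace assumed
	minAvailable := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
		},
	}
	if _, err := pdbClient.Create(context.TODO(), pdb, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Patch the status subresource (the trailing "status" argument), as the
	// test does once the controller has processed the PDB. The field below is
	// only an illustration of the mechanism.
	patch := []byte(`{"status":{"observedGeneration":1}}`)
	if _, err := pdbClient.Patch(context.TODO(), "demo-pdb",
		types.MergePatchType, patch, metav1.PatchOptions{}, "status"); err != nil {
		panic(err)
	}
}
------------------------------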
• [SLOW TEST:8.082 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":5,"skipped":81,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:28.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9382 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9382 STEP: creating replication controller externalsvc in namespace services-9382 I0506 22:07:28.303036 29 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9382, replica count: 2 I0506 22:07:31.354837 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:07:34.357067 29 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 6 22:07:34.372: INFO: Creating new exec pod May 6 22:07:38.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9382 exec execpod9pwdg -- /bin/sh -x -c nslookup clusterip-service.services-9382.svc.cluster.local' May 6 22:07:39.037: INFO: stderr: "+ nslookup clusterip-service.services-9382.svc.cluster.local\n" May 6 22:07:39.037: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nclusterip-service.services-9382.svc.cluster.local\tcanonical name = externalsvc.services-9382.svc.cluster.local.\nName:\texternalsvc.services-9382.svc.cluster.local\nAddress: 10.233.27.192\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9382, will wait for the garbage collector to delete the pods May 6 22:07:39.096: INFO: Deleting ReplicationController externalsvc took: 6.555135ms May 6 22:07:39.197: INFO: Terminating ReplicationController externalsvc pods took: 101.014286ms May 6 22:07:46.808: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:46.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9382" for this suite. 
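------------------------------
[Editor's note] The Services test above flips an existing ClusterIP service to type ExternalName, after which DNS for the service name resolves to a CNAME (the nslookup output in the log). A minimal sketch of that mutation; the service name, namespace, and external name are taken loosely from the log but should be treated as assumptions, and clearing ClusterIP/ClusterIPs/Ports reflects my understanding of what API validation requires for ExternalName services, not the suite's exact code.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	svcClient := cs.CoreV1().Services("default") // namespace assumed
	svc, err := svcClient.Get(context.TODO(), "clusterip-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc.default.svc.cluster.local"
	svc.Spec.ClusterIP = "" // an ExternalName service must not keep a cluster IP
	svc.Spec.ClusterIPs = nil
	svc.Spec.Ports = nil

	if _, err := svcClient.Update(context.TODO(), svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	// In-cluster DNS now answers lookups of clusterip-service with a CNAME
	// pointing at spec.externalName, which is what the test verifies.
}
------------------------------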
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:18.562 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":7,"skipped":131,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:46.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename discovery STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 STEP: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:07:47.122: INFO: Checking APIGroup: apiregistration.k8s.io May 6 22:07:47.123: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 May 6 22:07:47.123: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.123: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 May 6 22:07:47.123: INFO: Checking APIGroup: apps May 6 22:07:47.124: INFO: PreferredVersion.GroupVersion: apps/v1 May 6 22:07:47.124: INFO: Versions found [{apps/v1 v1}] May 6 22:07:47.124: INFO: apps/v1 matches apps/v1 May 6 22:07:47.124: INFO: Checking APIGroup: events.k8s.io May 6 22:07:47.126: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 May 6 22:07:47.126: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.126: INFO: events.k8s.io/v1 matches events.k8s.io/v1 May 6 22:07:47.126: INFO: Checking APIGroup: authentication.k8s.io May 6 22:07:47.127: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 May 6 22:07:47.127: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.127: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 May 6 22:07:47.127: INFO: Checking APIGroup: authorization.k8s.io May 6 22:07:47.128: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 May 6 22:07:47.128: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.128: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 May 6 22:07:47.128: INFO: Checking APIGroup: autoscaling May 6 22:07:47.129: INFO: PreferredVersion.GroupVersion: autoscaling/v1 May 6 22:07:47.129: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] May 6 22:07:47.129: INFO: autoscaling/v1 matches autoscaling/v1 May 6 22:07:47.129: INFO: Checking APIGroup: batch May 6 22:07:47.130: INFO: PreferredVersion.GroupVersion: batch/v1 May 6 22:07:47.130: INFO: Versions found [{batch/v1 v1} 
{batch/v1beta1 v1beta1}] May 6 22:07:47.130: INFO: batch/v1 matches batch/v1 May 6 22:07:47.130: INFO: Checking APIGroup: certificates.k8s.io May 6 22:07:47.131: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 May 6 22:07:47.131: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.131: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 May 6 22:07:47.131: INFO: Checking APIGroup: networking.k8s.io May 6 22:07:47.131: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 May 6 22:07:47.131: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.131: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 May 6 22:07:47.131: INFO: Checking APIGroup: extensions May 6 22:07:47.132: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 May 6 22:07:47.132: INFO: Versions found [{extensions/v1beta1 v1beta1}] May 6 22:07:47.132: INFO: extensions/v1beta1 matches extensions/v1beta1 May 6 22:07:47.132: INFO: Checking APIGroup: policy May 6 22:07:47.133: INFO: PreferredVersion.GroupVersion: policy/v1 May 6 22:07:47.133: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}] May 6 22:07:47.133: INFO: policy/v1 matches policy/v1 May 6 22:07:47.133: INFO: Checking APIGroup: rbac.authorization.k8s.io May 6 22:07:47.134: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 May 6 22:07:47.134: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.134: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 May 6 22:07:47.134: INFO: Checking APIGroup: storage.k8s.io May 6 22:07:47.135: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 May 6 22:07:47.135: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.135: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 May 6 22:07:47.135: INFO: Checking APIGroup: admissionregistration.k8s.io May 6 22:07:47.136: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 May 6 22:07:47.136: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.136: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 May 6 22:07:47.136: INFO: Checking APIGroup: apiextensions.k8s.io May 6 22:07:47.137: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 May 6 22:07:47.137: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.137: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 May 6 22:07:47.137: INFO: Checking APIGroup: scheduling.k8s.io May 6 22:07:47.137: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 May 6 22:07:47.137: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.137: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 May 6 22:07:47.137: INFO: Checking APIGroup: coordination.k8s.io May 6 22:07:47.138: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 May 6 22:07:47.138: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.138: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 May 6 22:07:47.138: INFO: Checking APIGroup: node.k8s.io May 6 22:07:47.139: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 May 6 22:07:47.139: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] May 6 
22:07:47.139: INFO: node.k8s.io/v1 matches node.k8s.io/v1 May 6 22:07:47.139: INFO: Checking APIGroup: discovery.k8s.io May 6 22:07:47.140: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1 May 6 22:07:47.140: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.140: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1 May 6 22:07:47.140: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io May 6 22:07:47.140: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 May 6 22:07:47.140: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.140: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 May 6 22:07:47.140: INFO: Checking APIGroup: intel.com May 6 22:07:47.141: INFO: PreferredVersion.GroupVersion: intel.com/v1 May 6 22:07:47.141: INFO: Versions found [{intel.com/v1 v1}] May 6 22:07:47.141: INFO: intel.com/v1 matches intel.com/v1 May 6 22:07:47.141: INFO: Checking APIGroup: k8s.cni.cncf.io May 6 22:07:47.142: INFO: PreferredVersion.GroupVersion: k8s.cni.cncf.io/v1 May 6 22:07:47.142: INFO: Versions found [{k8s.cni.cncf.io/v1 v1}] May 6 22:07:47.142: INFO: k8s.cni.cncf.io/v1 matches k8s.cni.cncf.io/v1 May 6 22:07:47.142: INFO: Checking APIGroup: monitoring.coreos.com May 6 22:07:47.143: INFO: PreferredVersion.GroupVersion: monitoring.coreos.com/v1 May 6 22:07:47.143: INFO: Versions found [{monitoring.coreos.com/v1 v1} {monitoring.coreos.com/v1alpha1 v1alpha1}] May 6 22:07:47.143: INFO: monitoring.coreos.com/v1 matches monitoring.coreos.com/v1 May 6 22:07:47.143: INFO: Checking APIGroup: telemetry.intel.com May 6 22:07:47.144: INFO: PreferredVersion.GroupVersion: telemetry.intel.com/v1alpha1 May 6 22:07:47.144: INFO: Versions found [{telemetry.intel.com/v1alpha1 v1alpha1}] May 6 22:07:47.144: INFO: telemetry.intel.com/v1alpha1 matches telemetry.intel.com/v1alpha1 May 6 22:07:47.144: INFO: Checking APIGroup: custom.metrics.k8s.io May 6 22:07:47.145: INFO: PreferredVersion.GroupVersion: custom.metrics.k8s.io/v1beta1 May 6 22:07:47.145: INFO: Versions found [{custom.metrics.k8s.io/v1beta1 v1beta1}] May 6 22:07:47.145: INFO: custom.metrics.k8s.io/v1beta1 matches custom.metrics.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:47.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "discovery-628" for this suite. 
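------------------------------
[Editor's note] The Discovery test above fetches /apis and checks every APIGroup's PreferredVersion against its advertised version list. A minimal sketch of the same discovery walk with client-go's discovery client; the formatting is illustrative (the core group appears with an empty name).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// ServerGroups issues the same /apis request the test logs, returning
	// each group with its server-preferred version and the full version list.
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Printf("group %-35s preferred %-45s (%d version(s))\n",
			g.Name, g.PreferredVersion.GroupVersion, len(g.Versions))
	}
}
------------------------------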
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":8,"skipped":137,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:47.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version May 6 22:07:47.179: INFO: Major version: 1 STEP: Confirm minor version May 6 22:07:47.179: INFO: cleanMinorVersion: 21 May 6 22:07:47.179: INFO: Minor version: 21 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:47.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-7520" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":9,"skipped":138,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:23.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-5kzb STEP: Creating a pod to test atomic-volume-subpath May 6 22:07:23.829: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5kzb" in namespace "subpath-3402" to be "Succeeded or Failed" May 6 22:07:23.831: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.893315ms May 6 22:07:25.835: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005152906s May 6 22:07:27.839: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 4.009146808s May 6 22:07:29.842: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 6.012888955s May 6 22:07:31.850: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 8.020724856s May 6 22:07:33.856: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 10.026226723s May 6 22:07:35.862: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.032348072s May 6 22:07:37.865: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 14.03572378s May 6 22:07:39.868: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 16.038954341s May 6 22:07:41.872: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 18.04216808s May 6 22:07:43.874: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 20.044784657s May 6 22:07:45.879: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Running", Reason="", readiness=true. Elapsed: 22.049135102s May 6 22:07:47.883: INFO: Pod "pod-subpath-test-configmap-5kzb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.053164802s STEP: Saw pod success May 6 22:07:47.883: INFO: Pod "pod-subpath-test-configmap-5kzb" satisfied condition "Succeeded or Failed" May 6 22:07:47.885: INFO: Trying to get logs from node node2 pod pod-subpath-test-configmap-5kzb container test-container-subpath-configmap-5kzb: STEP: delete the pod May 6 22:07:47.898: INFO: Waiting for pod pod-subpath-test-configmap-5kzb to disappear May 6 22:07:47.900: INFO: Pod pod-subpath-test-configmap-5kzb no longer exists STEP: Deleting pod pod-subpath-test-configmap-5kzb May 6 22:07:47.900: INFO: Deleting pod "pod-subpath-test-configmap-5kzb" in namespace "subpath-3402" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:47.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3402" for this suite. • [SLOW TEST:24.123 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":26,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:47.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events STEP: get a list of Events with a label in the current namespace STEP: delete a list of events May 6 22:07:47.959: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:47.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-1997" for this suite. • ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":3,"skipped":30,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:49.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe W0506 22:06:49.643300 34 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:06:49.643: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:06:49.645: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:49.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2423" for this suite. 
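------------------------------
[Editor's note] The Probing container test above relies on a readiness probe that always fails: the pod stays Running with Ready=false and is never restarted, since readiness only gates traffic while liveness triggers restarts. A minimal sketch of such a pod; the pod name, namespace, image, and probe timings are illustrative assumptions, and the probe field is named Handler in the v1.21-era client (ProbeHandler in client-go v0.22+).

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "never-ready-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.34", // assumed image
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // ProbeHandler in client-go >= v0.22
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The probe never succeeds, so the pod reports Ready=false indefinitely
	// while RestartCount stays 0 -- the two properties the test asserts.
}
------------------------------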
• [SLOW TEST:60.055 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:47.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:07:47.259: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd" in namespace "security-context-test-9374" to be "Succeeded or Failed" May 6 22:07:47.261: INFO: Pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229318ms May 6 22:07:49.264: INFO: Pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005624497s May 6 22:07:51.268: INFO: Pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008927157s May 6 22:07:53.271: INFO: Pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012574026s May 6 22:07:53.271: INFO: Pod "alpine-nnp-false-e4e50126-e4e2-40c4-9e83-4c530bba8edd" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:53.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9374" for this suite. 
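------------------------------
[Editor's note] The Security Context test above runs a non-root container with allowPrivilegeEscalation=false and verifies privileges cannot be raised. A minimal sketch of that security context; the pod name, namespace, image, UID, and the NoNewPrivs check command are illustrative assumptions rather than the suite's exact payload.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	runAsUser := int64(1000) // assumed non-root UID
	allowPE := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "nnp-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "alpine",
				Image:   "alpine:3.14", // assumed image
				Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:                &runAsUser,
					AllowPrivilegeEscalation: &allowPE,
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// With allowPrivilegeEscalation=false the container runs with the
	// no_new_privs flag set, so setuid binaries cannot raise the effective UID.
}
------------------------------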
• [SLOW TEST:6.160 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when creating containers with AllowPrivilegeEscalation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:296 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":150,"failed":0} S ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:41.100: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-vn2cm in namespace proxy-2212 I0506 22:07:41.133622 30 runners.go:190] Created replication controller with name: proxy-service-vn2cm, namespace: proxy-2212, replica count: 1 I0506 22:07:42.184833 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:07:43.186081 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:07:44.188105 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:07:45.189280 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:46.191131 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:47.191705 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:48.192883 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:49.194138 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:50.196175 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0506 22:07:51.197300 30 runners.go:190] proxy-service-vn2cm Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:07:51.199: INFO: setup took 10.075806078s, starting test cases STEP: running 16 
cases, 20 attempts per case, 320 total attempts May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.939707ms) May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.897292ms) May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.884642ms) May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.0976ms) May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.039072ms) May 6 22:07:51.203: INFO: (0) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 3.092785ms) May 6 22:07:51.206: INFO: (0) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 6.285638ms) May 6 22:07:51.206: INFO: (0) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 6.358029ms) May 6 22:07:51.206: INFO: (0) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 6.384456ms) May 6 22:07:51.206: INFO: (0) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 6.450642ms) May 6 22:07:51.206: INFO: (0) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 6.420582ms) May 6 22:07:51.210: INFO: (0) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 2.982733ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 3.089789ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.853626ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.877126ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.54316ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.399509ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.455894ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.385459ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... 
(200; 3.469952ms) May 6 22:07:51.214: INFO: (1) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.55666ms) May 6 22:07:51.215: INFO: (1) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.853078ms) May 6 22:07:51.215: INFO: (1) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.113904ms) May 6 22:07:51.215: INFO: (1) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.060396ms) May 6 22:07:51.215: INFO: (1) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 4.516758ms) May 6 22:07:51.215: INFO: (1) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 4.586318ms) May 6 22:07:51.218: INFO: (2) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 2.290467ms) May 6 22:07:51.218: INFO: (2) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.371393ms) May 6 22:07:51.218: INFO: (2) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.695407ms) May 6 22:07:51.218: INFO: (2) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.625979ms) May 6 22:07:51.218: INFO: (2) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.742333ms) May 6 22:07:51.219: INFO: (2) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 2.970346ms) May 6 22:07:51.219: INFO: (2) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.211428ms) May 6 22:07:51.219: INFO: (2) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 4.020565ms) May 6 22:07:51.220: INFO: (2) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 4.33477ms) May 6 22:07:51.220: INFO: (2) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 4.874971ms) May 6 22:07:51.220: INFO: (2) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.929219ms) May 6 22:07:51.220: INFO: (2) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 4.844543ms) May 6 22:07:51.220: INFO: (2) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 5.100866ms) May 6 22:07:51.222: INFO: (2) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 6.590021ms) May 6 22:07:51.225: INFO: (3) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... (200; 2.629147ms) May 6 22:07:51.225: INFO: (3) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.310327ms) May 6 22:07:51.225: INFO: (3) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 2.801251ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.068012ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.114405ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... 
(200; 3.011411ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.13613ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 3.085514ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.468375ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.293309ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.580727ms) May 6 22:07:51.226: INFO: (3) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.815703ms) May 6 22:07:51.227: INFO: (3) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 4.163215ms) May 6 22:07:51.229: INFO: (4) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.146748ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.490178ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 2.615373ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.657879ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 3.367036ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.359839ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.323874ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.369813ms) May 6 22:07:51.230: INFO: (4) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.357517ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.365785ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.740551ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 4.090715ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.20641ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 4.260882ms) May 6 22:07:51.231: INFO: (4) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.531573ms) May 6 22:07:51.234: INFO: (5) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.447157ms) May 6 22:07:51.234: INFO: (5) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.459819ms) May 6 22:07:51.234: INFO: (5) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 2.772545ms) May 6 22:07:51.234: INFO: (5) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... 
(200; 2.768002ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.010254ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 3.217732ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.174449ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.227983ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.269654ms) May 6 22:07:51.235: INFO: (5) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.526379ms) May 6 22:07:51.236: INFO: (5) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.84125ms) May 6 22:07:51.236: INFO: (5) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 4.216057ms) May 6 22:07:51.236: INFO: (5) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.277486ms) May 6 22:07:51.238: INFO: (6) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.239882ms) May 6 22:07:51.238: INFO: (6) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.188533ms) May 6 22:07:51.239: INFO: (6) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.541999ms) May 6 22:07:51.239: INFO: (6) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.531891ms) May 6 22:07:51.239: INFO: (6) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... (200; 2.989406ms) May 6 22:07:51.239: INFO: (6) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 3.207799ms) May 6 22:07:51.239: INFO: (6) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.17569ms) May 6 22:07:51.240: INFO: (6) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.413018ms) May 6 22:07:51.240: INFO: (6) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.450531ms) May 6 22:07:51.240: INFO: (6) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 3.436419ms) May 6 22:07:51.240: INFO: (6) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.671421ms) May 6 22:07:51.240: INFO: (6) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.091606ms) May 6 22:07:51.241: INFO: (6) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.64657ms) May 6 22:07:51.243: INFO: (7) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test<... (200; 2.526458ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.781214ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.614999ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.569721ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... 
(200; 3.024234ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 3.080447ms) May 6 22:07:51.244: INFO: (7) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.225248ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.557499ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.475798ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.597467ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.711242ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.781075ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.778176ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.863957ms) May 6 22:07:51.245: INFO: (7) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.2056ms) May 6 22:07:51.248: INFO: (8) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.10847ms) May 6 22:07:51.248: INFO: (8) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 2.35277ms) May 6 22:07:51.248: INFO: (8) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.272652ms) May 6 22:07:51.248: INFO: (8) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.654701ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.161798ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.943178ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.101854ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 3.10021ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.222184ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.328106ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.754415ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.694991ms) May 6 22:07:51.249: INFO: (8) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.746935ms) May 6 22:07:51.250: INFO: (8) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.953699ms) May 6 22:07:51.251: INFO: (9) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 1.800269ms) May 6 22:07:51.252: INFO: (9) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... 
(200; 2.45225ms) May 6 22:07:51.252: INFO: (9) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.469724ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.763726ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.639941ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 2.836787ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.112023ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.286489ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.330002ms) May 6 22:07:51.253: INFO: (9) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.568367ms) May 6 22:07:51.254: INFO: (9) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.889835ms) May 6 22:07:51.254: INFO: (9) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 4.117825ms) May 6 22:07:51.254: INFO: (9) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 4.174894ms) May 6 22:07:51.254: INFO: (9) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.132972ms) May 6 22:07:51.256: INFO: (10) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 1.781726ms) May 6 22:07:51.256: INFO: (10) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 1.915192ms) May 6 22:07:51.256: INFO: (10) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.359915ms) May 6 22:07:51.256: INFO: (10) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 2.540504ms) May 6 22:07:51.257: INFO: (10) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... 
(200; 2.643997ms) May 6 22:07:51.257: INFO: (10) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.84131ms) May 6 22:07:51.257: INFO: (10) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.981491ms) May 6 22:07:51.257: INFO: (10) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.124176ms) May 6 22:07:51.257: INFO: (10) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.352764ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.390735ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.450028ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.535118ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.908178ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.91975ms) May 6 22:07:51.258: INFO: (10) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.213533ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.157411ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.350523ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.38636ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.716353ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.665995ms) May 6 22:07:51.261: INFO: (11) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... 
(200; 3.304663ms) May 6 22:07:51.262: INFO: (11) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.475311ms) May 6 22:07:51.262: INFO: (11) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.324435ms) May 6 22:07:51.262: INFO: (11) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.602073ms) May 6 22:07:51.263: INFO: (11) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.83595ms) May 6 22:07:51.263: INFO: (11) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.059116ms) May 6 22:07:51.263: INFO: (11) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.070674ms) May 6 22:07:51.263: INFO: (11) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 4.765359ms) May 6 22:07:51.265: INFO: (12) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 1.906966ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.256877ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.395536ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.411455ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.724219ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.863546ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.00313ms) May 6 22:07:51.266: INFO: (12) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... (200; 3.49762ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.588957ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 3.440707ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 3.540909ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.805486ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 4.002996ms) May 6 22:07:51.267: INFO: (12) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.901496ms) May 6 22:07:51.268: INFO: (12) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.311596ms) May 6 22:07:51.270: INFO: (13) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... 
(200; 2.159613ms) May 6 22:07:51.270: INFO: (13) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.363995ms) May 6 22:07:51.271: INFO: (13) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.454753ms) May 6 22:07:51.271: INFO: (13) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.644905ms) May 6 22:07:51.271: INFO: (13) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.640675ms) May 6 22:07:51.271: INFO: (13) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test<... (200; 3.290617ms) May 6 22:07:51.271: INFO: (13) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.20267ms) May 6 22:07:51.272: INFO: (13) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.585419ms) May 6 22:07:51.272: INFO: (13) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.890507ms) May 6 22:07:51.272: INFO: (13) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.895151ms) May 6 22:07:51.272: INFO: (13) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 4.207773ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.232688ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.32477ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.467985ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.579456ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... (200; 2.707954ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.827396ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 2.880422ms) May 6 22:07:51.275: INFO: (14) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.836056ms) May 6 22:07:51.277: INFO: (14) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 4.304763ms) May 6 22:07:51.277: INFO: (14) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.764377ms) May 6 22:07:51.277: INFO: (14) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.930749ms) May 6 22:07:51.278: INFO: (14) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 4.779924ms) May 6 22:07:51.278: INFO: (14) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 5.115547ms) May 6 22:07:51.278: INFO: (14) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 5.174205ms) May 6 22:07:51.278: INFO: (14) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 5.012008ms) May 6 22:07:51.281: INFO: (15) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test<... 
(200; 2.844041ms) May 6 22:07:51.281: INFO: (15) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 3.201133ms) May 6 22:07:51.281: INFO: (15) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.849741ms) May 6 22:07:51.281: INFO: (15) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.553524ms) May 6 22:07:51.281: INFO: (15) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.953805ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 3.35149ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 3.589751ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.010869ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.966706ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.538701ms) May 6 22:07:51.282: INFO: (15) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 4.003517ms) May 6 22:07:51.284: INFO: (15) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 5.239081ms) May 6 22:07:51.284: INFO: (15) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 5.43713ms) May 6 22:07:51.284: INFO: (15) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 5.301645ms) May 6 22:07:51.286: INFO: (16) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test<... (200; 2.331817ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.573951ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.765414ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.631274ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.257415ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 3.137792ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.204342ms) May 6 22:07:51.287: INFO: (16) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... 
(200; 3.208634ms) May 6 22:07:51.288: INFO: (16) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.631286ms) May 6 22:07:51.288: INFO: (16) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.658771ms) May 6 22:07:51.288: INFO: (16) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.675202ms) May 6 22:07:51.288: INFO: (16) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.848667ms) May 6 22:07:51.288: INFO: (16) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 4.012906ms) May 6 22:07:51.290: INFO: (17) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 1.918354ms) May 6 22:07:51.290: INFO: (17) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.03734ms) May 6 22:07:51.291: INFO: (17) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test (200; 2.485641ms) May 6 22:07:51.291: INFO: (17) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.796292ms) May 6 22:07:51.291: INFO: (17) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.877201ms) May 6 22:07:51.291: INFO: (17) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... (200; 2.816389ms) May 6 22:07:51.291: INFO: (17) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 2.864029ms) May 6 22:07:51.292: INFO: (17) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 3.101057ms) May 6 22:07:51.292: INFO: (17) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.518295ms) May 6 22:07:51.292: INFO: (17) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.410402ms) May 6 22:07:51.292: INFO: (17) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.594769ms) May 6 22:07:51.292: INFO: (17) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.886592ms) May 6 22:07:51.293: INFO: (17) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.048886ms) May 6 22:07:51.294: INFO: (18) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 1.674124ms) May 6 22:07:51.295: INFO: (18) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 1.943015ms) May 6 22:07:51.295: INFO: (18) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.400472ms) May 6 22:07:51.295: INFO: (18) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: ... 
(200; 2.808755ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.855195ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.801547ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.800679ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.209789ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:1080/proxy/: test<... (200; 3.280009ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.354063ms) May 6 22:07:51.296: INFO: (18) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 3.527524ms) May 6 22:07:51.297: INFO: (18) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.770129ms) May 6 22:07:51.297: INFO: (18) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 4.047801ms) May 6 22:07:51.297: INFO: (18) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 4.032265ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.548543ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:443/proxy/: test<... (200; 2.750972ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:162/proxy/: bar (200; 2.628174ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:462/proxy/: tls qux (200; 2.709528ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb/proxy/: test (200; 2.674729ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:1080/proxy/: ... 
(200; 2.786852ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/https:proxy-service-vn2cm-7zrqb:460/proxy/: tls baz (200; 2.87576ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 2.8861ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/pods/proxy-service-vn2cm-7zrqb:160/proxy/: foo (200; 3.282713ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/: tls baz (200; 3.39912ms) May 6 22:07:51.300: INFO: (19) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname2/proxy/: bar (200; 3.512752ms) May 6 22:07:51.301: INFO: (19) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname2/proxy/: bar (200; 3.573939ms) May 6 22:07:51.301: INFO: (19) /api/v1/namespaces/proxy-2212/services/proxy-service-vn2cm:portname1/proxy/: foo (200; 3.814328ms) May 6 22:07:51.301: INFO: (19) /api/v1/namespaces/proxy-2212/services/http:proxy-service-vn2cm:portname1/proxy/: foo (200; 3.84567ms) May 6 22:07:51.301: INFO: (19) /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname2/proxy/: tls qux (200; 4.345924ms) STEP: deleting ReplicationController proxy-service-vn2cm in namespace proxy-2212, will wait for the garbage collector to delete the pods May 6 22:07:51.359: INFO: Deleting ReplicationController proxy-service-vn2cm took: 4.985959ms May 6 22:07:51.460: INFO: Terminating ReplicationController proxy-service-vn2cm pods took: 101.202111ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:54.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2212" for this suite. 
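All 320 attempts above go through the apiserver proxy subresource rather than to the pod directly. The same URLs can be exercised by hand with kubectl get --raw; the two paths below are copied from this run (the proxy-2212 namespace is destroyed at the end of the test, so they only resolve while it exists):

# proxy to a numbered pod port, scheme chosen by the http:/https: prefix
kubectl --kubeconfig=/root/.kube/config get --raw \
  /api/v1/namespaces/proxy-2212/pods/http:proxy-service-vn2cm-7zrqb:160/proxy/

# proxy to a named service port
kubectl --kubeconfig=/root/.kube/config get --raw \
  /api/v1/namespaces/proxy-2212/services/https:proxy-service-vn2cm:tlsportname1/proxy/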
• [SLOW TEST:13.767 seconds] [sig-network] Proxy /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74 should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":3,"skipped":40,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:49.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:07:49.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623" in namespace "projected-1246" to be "Succeeded or Failed" May 6 22:07:49.734: INFO: Pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.192998ms May 6 22:07:51.739: INFO: Pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006568066s May 6 22:07:53.742: INFO: Pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010108568s May 6 22:07:55.749: INFO: Pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016899551s STEP: Saw pod success May 6 22:07:55.749: INFO: Pod "downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623" satisfied condition "Succeeded or Failed" May 6 22:07:55.751: INFO: Trying to get logs from node node1 pod downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623 container client-container: STEP: delete the pod May 6 22:07:55.765: INFO: Waiting for pod downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623 to disappear May 6 22:07:55.767: INFO: Pod downwardapi-volume-de7c338d-44c0-474f-873e-e747c46af623 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:55.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1246" for this suite. 
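The projected downward API volume verified here surfaces the container's own CPU limit as a file. A minimal sketch of an equivalent manifest, assuming a busybox image and illustrative names (only the limits.cpu/resourceFieldRef wiring mirrors what the suite checks):

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-limit-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.34              # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m            # file then contains the limit in millicores: "500"
EOF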
• [SLOW TEST:6.080 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:44.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 6 22:07:44.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 create -f -' May 6 22:07:45.125: INFO: stderr: "" May 6 22:07:45.125: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:07:45.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:07:45.316: INFO: stderr: "" May 6 22:07:45.316: INFO: stdout: "update-demo-nautilus-4cl6j update-demo-nautilus-z65zh " May 6 22:07:45.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-4cl6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:07:45.496: INFO: stderr: "" May 6 22:07:45.496: INFO: stdout: "" May 6 22:07:45.496: INFO: update-demo-nautilus-4cl6j is created but not running May 6 22:07:50.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:07:50.673: INFO: stderr: "" May 6 22:07:50.673: INFO: stdout: "update-demo-nautilus-4cl6j update-demo-nautilus-z65zh " May 6 22:07:50.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-4cl6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:07:50.845: INFO: stderr: "" May 6 22:07:50.845: INFO: stdout: "" May 6 22:07:50.845: INFO: update-demo-nautilus-4cl6j is created but not running May 6 22:07:55.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:07:56.033: INFO: stderr: "" May 6 22:07:56.033: INFO: stdout: "update-demo-nautilus-4cl6j update-demo-nautilus-z65zh " May 6 22:07:56.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-4cl6j -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:07:56.215: INFO: stderr: "" May 6 22:07:56.215: INFO: stdout: "true" May 6 22:07:56.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-4cl6j -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:07:56.392: INFO: stderr: "" May 6 22:07:56.392: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:07:56.392: INFO: validating pod update-demo-nautilus-4cl6j May 6 22:07:56.397: INFO: got data: { "image": "nautilus.jpg" } May 6 22:07:56.397: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:07:56.397: INFO: update-demo-nautilus-4cl6j is verified up and running May 6 22:07:56.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-z65zh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:07:56.566: INFO: stderr: "" May 6 22:07:56.566: INFO: stdout: "true" May 6 22:07:56.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods update-demo-nautilus-z65zh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:07:56.735: INFO: stderr: "" May 6 22:07:56.735: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:07:56.735: INFO: validating pod update-demo-nautilus-z65zh May 6 22:07:56.738: INFO: got data: { "image": "nautilus.jpg" } May 6 22:07:56.739: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:07:56.739: INFO: update-demo-nautilus-z65zh is verified up and running STEP: using delete to clean up resources May 6 22:07:56.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 delete --grace-period=0 --force -f -' May 6 22:07:56.889: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 22:07:56.889: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 22:07:56.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get rc,svc -l name=update-demo --no-headers' May 6 22:07:57.093: INFO: stderr: "No resources found in kubectl-9132 namespace.\n" May 6 22:07:57.093: INFO: stdout: "" May 6 22:07:57.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9132 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:07:57.266: INFO: stderr: "" May 6 22:07:57.266: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:57.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9132" for this suite. • [SLOW TEST:12.547 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":8,"skipped":95,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:53.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-80a72571-5351-4e98-9dfd-cc97bb0ddb77 STEP: Creating a pod to test consume secrets May 6 22:07:53.420: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c" in namespace "projected-9403" to be "Succeeded or Failed" May 6 22:07:53.423: INFO: Pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.664689ms May 6 22:07:55.428: INFO: Pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00866404s May 6 22:07:57.433: INFO: Pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c": Phase="Running", Reason="", readiness=true. Elapsed: 4.01292883s May 6 22:07:59.438: INFO: Pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.018492317s STEP: Saw pod success May 6 22:07:59.438: INFO: Pod "pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c" satisfied condition "Succeeded or Failed" May 6 22:07:59.441: INFO: Trying to get logs from node node2 pod pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c container projected-secret-volume-test: STEP: delete the pod May 6 22:07:59.457: INFO: Waiting for pod pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c to disappear May 6 22:07:59.459: INFO: Pod pod-projected-secrets-3e99a2b9-97b9-43b4-91c0-24361028d41c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:59.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9403" for this suite. • [SLOW TEST:6.088 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":151,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:59.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:59.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7769" for this suite. 
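The fetch/patch/delete/list steps in this Events test correspond one-to-one with ordinary kubectl verbs against the core v1 Event resource. A sketch — the event name test-event is illustrative; the suite creates its own:

kubectl --kubeconfig=/root/.kube/config get events -A                    # list in all namespaces
kubectl --kubeconfig=/root/.kube/config -n events-7769 get event test-event
kubectl --kubeconfig=/root/.kube/config -n events-7769 patch event test-event \
  --type=merge -p '{"message":"patched message"}'
kubectl --kubeconfig=/root/.kube/config -n events-7769 delete event test-event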
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":12,"skipped":200,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:55.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 6 22:07:55.899: INFO: Waiting up to 5m0s for pod "security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94" in namespace "security-context-4392" to be "Succeeded or Failed" May 6 22:07:55.902: INFO: Pod "security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.848179ms May 6 22:07:57.906: INFO: Pod "security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00756646s May 6 22:07:59.910: INFO: Pod "security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011688239s STEP: Saw pod success May 6 22:07:59.910: INFO: Pod "security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94" satisfied condition "Succeeded or Failed" May 6 22:07:59.913: INFO: Trying to get logs from node node1 pod security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94 container test-container: STEP: delete the pod May 6 22:07:59.927: INFO: Waiting for pod security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94 to disappear May 6 22:07:59.929: INFO: Pod security-context-e11bc8ed-f3db-4ba9-96ab-6886997fca94 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:07:59.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-4392" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":51,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:54.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 6 22:07:54.912: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 6 22:07:54.917: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 6 22:07:54.917: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 6 22:07:54.929: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 6 22:07:54.929: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 6 22:07:54.941: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 6 22:07:54.941: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 6 22:08:01.990: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:02.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-1796" for this suite. 
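The request/limit defaults echoed in the "Verifying" lines above can be reconstructed into the LimitRange the test submits. The sketch below uses only values that appear in this run's log (214748364800 = 200Gi, 209715200 = 200Mi, 536870912000 = 500Gi, 524288000 = 500Mi); the min/max bounds the test also exercises are never printed numerically, so they are omitted rather than guessed:

kubectl --kubeconfig=/root/.kube/config create -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: limitrange-demo              # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:                  # filled in as requests when a pod omits them
      cpu: 100m
      memory: 200Mi
      ephemeral-storage: 200Gi
    default:                         # filled in as limits when a pod omits them
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 500Gi
EOF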
• [SLOW TEST:7.121 seconds] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:20.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath May 6 22:07:25.022: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-6512 PodName:var-expansion-6d90aaf0-524c-45ad-93f0-eb2be139a103 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:07:25.022: INFO: >>> kubeConfig: /root/.kube/config STEP: test for file in mounted path May 6 22:07:25.187: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-6512 PodName:var-expansion-6d90aaf0-524c-45ad-93f0-eb2be139a103 ContainerName:dapi-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:07:25.187: INFO: >>> kubeConfig: /root/.kube/config STEP: updating the annotation value May 6 22:07:26.131: INFO: Successfully updated pod "var-expansion-6d90aaf0-524c-45ad-93f0-eb2be139a103" STEP: waiting for annotated pod running STEP: deleting the pod gracefully May 6 22:07:26.133: INFO: Deleting pod "var-expansion-6d90aaf0-524c-45ad-93f0-eb2be139a103" in namespace "var-expansion-6512" May 6 22:07:26.138: INFO: Wait up to 5m0s for pod "var-expansion-6d90aaf0-524c-45ad-93f0-eb2be139a103" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:04.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6512" for this suite. 
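
The subpath steps above depend on one volume being mounted twice: once whole (/volume_mount) and once through a subpath expanded from an env var (/subpath_mount), so a file written under the expanded path in the first mount appears at the root of the second. A minimal self-contained sketch of that shape, with illustrative names and image; subPathExpr is the field under test:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-sketch                   # hypothetical name
  annotations:
    mysubpath: mypath/foo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1   # illustrative image
    command: ["sh", "-c", "touch /volume_mount/mypath/foo/test.log && test -f /subpath_mount/test.log"]
    env:
    - name: POD_SUBPATH
      valueFrom:
        fieldRef:
          fieldPath: metadata.annotations['mysubpath']
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount           # the whole volume
    - name: workdir
      mountPath: /subpath_mount
      subPathExpr: $(POD_SUBPATH)        # expands to mypath/foo at mount time
  volumes:
  - name: workdir
    emptyDir: {}
EOF
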
• [SLOW TEST:43.225 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":5,"skipped":105,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:44.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. May 6 22:07:44.783: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:46.786: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:48.788: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 6 22:07:48.804: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:50.809: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:52.809: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:54.808: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 22:07:54.819: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:07:54.822: INFO: Pod pod-with-poststart-http-hook still exists May 6 22:07:56.823: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:07:56.827: INFO: Pod pod-with-poststart-http-hook still exists May 6 22:07:58.823: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:07:58.827: INFO: Pod pod-with-poststart-http-hook still exists May 6 22:08:00.823: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:08:00.826: INFO: Pod pod-with-poststart-http-hook still exists May 6 22:08:02.824: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:08:02.826: INFO: Pod pod-with-poststart-http-hook still exists May 6 22:08:04.822: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 6 22:08:04.825: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:04.825: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1030" for this suite. • [SLOW TEST:20.087 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:57.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1514 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 6 22:07:57.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5957 run e2e-test-httpd-pod --restart=Never --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1' May 6 22:07:57.508: INFO: stderr: "" May 6 22:07:57.508: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1518 May 6 22:07:57.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5957 delete pods e2e-test-httpd-pod' May 6 22:08:06.693: INFO: stderr: "" May 6 22:08:06.693: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:06.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5957" for this suite. 
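
Because the run above passes --restart=Never, kubectl creates a bare Pod instead of a Deployment-managed workload, which is why the cleanup is a plain pod deletion (and why it takes several seconds: the delete blocks through graceful termination). The same sequence by hand, without the test's --kubeconfig/--namespace plumbing:

kubectl run e2e-test-httpd-pod --restart=Never \
  --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
kubectl get pod e2e-test-httpd-pod      # verify the pod object was created
kubectl delete pod e2e-test-httpd-pod   # blocks until the pod is fully gone
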
• [SLOW TEST:9.377 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511 should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":9,"skipped":116,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:02.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 6 22:08:02.084: INFO: namespace kubectl-9144 May 6 22:08:02.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9144 create -f -' May 6 22:08:02.425: INFO: stderr: "" May 6 22:08:02.425: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 6 22:08:03.429: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:03.429: INFO: Found 0 / 1 May 6 22:08:04.429: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:04.429: INFO: Found 0 / 1 May 6 22:08:05.429: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:05.429: INFO: Found 0 / 1 May 6 22:08:06.429: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:06.429: INFO: Found 1 / 1 May 6 22:08:06.429: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 22:08:06.432: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:06.432: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 22:08:06.432: INFO: wait on agnhost-primary startup in kubectl-9144 May 6 22:08:06.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9144 logs agnhost-primary-h5zb5 agnhost-primary' May 6 22:08:06.592: INFO: stderr: "" May 6 22:08:06.592: INFO: stdout: "Paused\n" STEP: exposing RC May 6 22:08:06.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9144 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' May 6 22:08:06.800: INFO: stderr: "" May 6 22:08:06.800: INFO: stdout: "service/rm2 exposed\n" May 6 22:08:06.802: INFO: Service rm2 in namespace kubectl-9144 found. STEP: exposing service May 6 22:08:08.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9144 expose service rm2 --name=rm3 --port=2345 --target-port=6379' May 6 22:08:09.018: INFO: stderr: "" May 6 22:08:09.018: INFO: stdout: "service/rm3 exposed\n" May 6 22:08:09.021: INFO: Service rm3 in namespace kubectl-9144 found. 
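
Each expose call above derives the new Service's selector and target from the object it is pointed at, so rm2 selects the RC's pods directly and rm3, exposed from rm2, inherits the same selector; only the service ports (1234 vs 2345) differ while both target 6379. Condensed:

kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
kubectl get endpoints rm2 rm3   # both should resolve to the agnhost pod on 6379
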
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:11.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9144" for this suite. • [SLOW TEST:8.973 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223 should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":5,"skipped":69,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:04.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 6 22:08:04.921: INFO: The status of Pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:06.925: INFO: The status of Pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:08.925: INFO: The status of Pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:10.925: INFO: The status of Pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 22:08:11.439: INFO: Successfully updated pod "pod-update-bc065325-9dbe-493f-808a-dc84cd93de67" STEP: verifying the updated pod is in kubernetes May 6 22:08:11.444: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:11.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6779" for this suite. 
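
A running pod accepts in-place updates only on a few fields (metadata labels and annotations, container images, activeDeadlineSeconds, toleration additions), so the "updating the pod" step above is necessarily one of those. A sketch of an equivalent mutation against the same pod; the label key here is illustrative, not taken from this run:

kubectl label pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 \
  time=modified --overwrite
kubectl get pod pod-update-bc065325-9dbe-493f-808a-dc84cd93de67 \
  -o jsonpath='{.metadata.labels}'
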
• [SLOW TEST:6.565 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":119,"failed":0} [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:11.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:08:11.490: INFO: The status of Pod busybox-readonly-fs5cacc9c8-dd28-4ca5-9496-280bba297ef3 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:13.492: INFO: The status of Pod busybox-readonly-fs5cacc9c8-dd28-4ca5-9496-280bba297ef3 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:15.492: INFO: The status of Pod busybox-readonly-fs5cacc9c8-dd28-4ca5-9496-280bba297ef3 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:15.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4206" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":119,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:15.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:15.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7593" for this suite. 
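
The QOS assertion above follows from the classing rule: requests equal to limits for both cpu and memory in every container yields Guaranteed, requests below limits yields Burstable, and no requests or limits at all yields BestEffort. A minimal sketch of the Guaranteed case (pod name and sizes illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-sketch                 # hypothetical name
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                      # equal to requests => Guaranteed
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-sketch -o jsonpath='{.status.qosClass}'   # prints Guaranteed
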
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":9,"skipped":123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:11.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:08:11.448: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:08:13.456: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471691, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471691, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471691, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471691, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:08:16.469: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:16.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9469" for this suite. 
STEP: Destroying namespace "webhook-9469-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.400 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:48.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service nodeport-service with the type=NodePort in namespace services-6231 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6231 STEP: creating replication controller externalsvc in namespace services-6231 I0506 22:07:48.045830 27 runners.go:190] Created replication controller with name: externalsvc, namespace: services-6231, replica count: 2 I0506 22:07:51.097627 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:07:54.097864 27 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 6 22:07:54.112: INFO: Creating new exec pod May 6 22:08:00.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6231 exec execpod9qzrn -- /bin/sh -x -c nslookup nodeport-service.services-6231.svc.cluster.local' May 6 22:08:00.437: INFO: stderr: "+ nslookup nodeport-service.services-6231.svc.cluster.local\n" May 6 22:08:00.437: INFO: stdout: "Server:\t\t10.233.0.3\nAddress:\t10.233.0.3#53\n\nnodeport-service.services-6231.svc.cluster.local\tcanonical name = externalsvc.services-6231.svc.cluster.local.\nName:\texternalsvc.services-6231.svc.cluster.local\nAddress: 10.233.46.45\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6231, will wait for the garbage collector to delete the pods May 6 22:08:00.496: INFO: Deleting ReplicationController externalsvc took: 5.544004ms May 6 22:08:00.597: INFO: Terminating ReplicationController externalsvc pods took: 100.972533ms May 6 22:08:16.707: INFO: Cleaning up the NodePort to 
ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:16.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6231" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:28.713 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":4,"skipped":36,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:15.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 22:08:15.659: INFO: Waiting up to 5m0s for pod "pod-53a29444-3451-4def-9fe3-95a2623f49ec" in namespace "emptydir-6091" to be "Succeeded or Failed" May 6 22:08:15.662: INFO: Pod "pod-53a29444-3451-4def-9fe3-95a2623f49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.264058ms May 6 22:08:17.666: INFO: Pod "pod-53a29444-3451-4def-9fe3-95a2623f49ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007537124s May 6 22:08:19.668: INFO: Pod "pod-53a29444-3451-4def-9fe3-95a2623f49ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009778983s STEP: Saw pod success May 6 22:08:19.668: INFO: Pod "pod-53a29444-3451-4def-9fe3-95a2623f49ec" satisfied condition "Succeeded or Failed" May 6 22:08:19.671: INFO: Trying to get logs from node node2 pod pod-53a29444-3451-4def-9fe3-95a2623f49ec container test-container: STEP: delete the pod May 6 22:08:19.684: INFO: Waiting for pod pod-53a29444-3451-4def-9fe3-95a2623f49ec to disappear May 6 22:08:19.686: INFO: Pod pod-53a29444-3451-4def-9fe3-95a2623f49ec no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:19.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6091" for this suite. 
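
The (root,0777,default) triple in the test name decodes as: write the file as root, expect mode 0777, and use the default emptyDir medium (node disk rather than tmpfs). A sketch of the volume shape, with illustrative names; ls -l stands in for the test image's mode check:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-sketch            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1    # illustrative image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # {} selects the default, disk-backed medium
EOF
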
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":149,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:16.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-0892d1a5-8a76-46eb-9596-2ba37e0f3d06 STEP: Creating a pod to test consume configMaps May 6 22:08:16.766: INFO: Waiting up to 5m0s for pod "pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf" in namespace "configmap-31" to be "Succeeded or Failed" May 6 22:08:16.768: INFO: Pod "pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 1.939805ms May 6 22:08:18.772: INFO: Pod "pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006412396s May 6 22:08:20.779: INFO: Pod "pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013302469s STEP: Saw pod success May 6 22:08:20.779: INFO: Pod "pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf" satisfied condition "Succeeded or Failed" May 6 22:08:20.782: INFO: Trying to get logs from node node1 pod pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf container agnhost-container: STEP: delete the pod May 6 22:08:20.796: INFO: Waiting for pod pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf to disappear May 6 22:08:20.798: INFO: Pod pod-configmaps-939cd3c4-a797-4105-8f66-5abe20bb3dbf no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:20.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-31" for this suite. 
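
defaultMode in that test name is the file permission stamped on every key projected into the ConfigMap volume. A sketch with illustrative names and a 0400 mode; stat -L follows the projected symlink to report the real mode:

kubectl create configmap cm-sketch --from-literal=data-1=value-1   # hypothetical
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mode-sketch      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1    # illustrative image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1 && stat -Lc '%a' /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-sketch
      defaultMode: 0400            # every projected key becomes -r--------
EOF
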
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:16.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token STEP: reading a file in the container May 6 22:08:21.129: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1426 pod-service-account-d4501dcf-2dab-41ff-8df2-7adcc8aa7a24 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 6 22:08:21.389: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1426 pod-service-account-d4501dcf-2dab-41ff-8df2-7adcc8aa7a24 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 6 22:08:21.645: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1426 pod-service-account-d4501dcf-2dab-41ff-8df2-7adcc8aa7a24 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:22.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1426" for this suite. • [SLOW TEST:5.697 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":7,"skipped":137,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:19.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test env composition May 6 22:08:19.792: INFO: Waiting up to 5m0s for pod "var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb" in namespace "var-expansion-2462" to be "Succeeded or Failed" May 6 22:08:19.794: INFO: Pod "var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.451006ms May 6 22:08:21.798: INFO: Pod "var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006484536s May 6 22:08:23.803: INFO: Pod "var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011543013s STEP: Saw pod success May 6 22:08:23.803: INFO: Pod "var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb" satisfied condition "Succeeded or Failed" May 6 22:08:23.805: INFO: Trying to get logs from node node2 pod var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb container dapi-container: STEP: delete the pod May 6 22:08:23.939: INFO: Waiting for pod var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb to disappear May 6 22:08:23.941: INFO: Pod var-expansion-daac99be-ca72-4c9b-9b34-30a96d27e1eb no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:23.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2462" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":177,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:20.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:08:20.883: INFO: Creating pod... May 6 22:08:20.897: INFO: Pod Quantity: 1 Status: Pending May 6 22:08:21.900: INFO: Pod Quantity: 1 Status: Pending May 6 22:08:22.902: INFO: Pod Quantity: 1 Status: Pending May 6 22:08:23.901: INFO: Pod Status: Running May 6 22:08:23.901: INFO: Creating service... 
May 6 22:08:23.907: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/DELETE May 6 22:08:24.040: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 6 22:08:24.040: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/GET May 6 22:08:24.042: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 6 22:08:24.042: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/HEAD May 6 22:08:24.045: INFO: http.Client request:HEAD | StatusCode:200 May 6 22:08:24.045: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/OPTIONS May 6 22:08:24.048: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 6 22:08:24.048: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/PATCH May 6 22:08:24.050: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 6 22:08:24.050: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/POST May 6 22:08:24.052: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 6 22:08:24.052: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/pods/agnhost/proxy/some/path/with/PUT May 6 22:08:24.054: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT May 6 22:08:24.054: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/DELETE May 6 22:08:24.057: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE May 6 22:08:24.057: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/GET May 6 22:08:24.060: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET May 6 22:08:24.060: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/HEAD May 6 22:08:24.063: INFO: http.Client request:HEAD | StatusCode:200 May 6 22:08:24.063: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/OPTIONS May 6 22:08:24.066: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS May 6 22:08:24.066: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/PATCH May 6 22:08:24.068: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH May 6 22:08:24.069: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/POST May 6 22:08:24.071: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST May 6 22:08:24.071: INFO: Starting http.Client for https://10.10.190.202:6443/api/v1/namespaces/proxy-4326/services/test-service/proxy/some/path/with/PUT May 6 22:08:24.074: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT [AfterEach] version v1 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:24.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4326" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":6,"skipped":61,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:06:49.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected W0506 22:06:49.926778 28 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ May 6 22:06:49.927: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled May 6 22:06:49.928: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-4fdae84d-abb7-4907-94ab-958d4b125785 STEP: Creating configMap with name cm-test-opt-upd-f103da62-a2fe-4b8e-9cfc-56318e8962dd STEP: Creating the pod May 6 22:06:49.956: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:51.961: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:53.961: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:55.961: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:57.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:06:59.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:01.964: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:03.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:05.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:07.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:09.959: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting 
for it to be Running (with Ready = true) May 6 22:07:11.960: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:13.962: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:15.961: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:17.959: INFO: The status of Pod pod-projected-configmaps-f49ff98a-b2dd-4c97-be7c-2c73aaa0b685 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-4fdae84d-abb7-4907-94ab-958d4b125785 STEP: Updating configmap cm-test-opt-upd-f103da62-a2fe-4b8e-9cfc-56318e8962dd STEP: Creating configMap with name cm-test-opt-create-a73e92f7-483b-4a4b-995c-cc4be3cfd091 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:25.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9798" for this suite. • [SLOW TEST:95.360 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":72,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:22.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on node default medium May 6 22:08:22.378: INFO: Waiting up to 5m0s for pod "pod-cee25828-a810-4a1a-a410-a1c9873a348b" in namespace "emptydir-3589" to be "Succeeded or Failed" May 6 22:08:22.380: INFO: Pod "pod-cee25828-a810-4a1a-a410-a1c9873a348b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.232286ms May 6 22:08:24.383: INFO: Pod "pod-cee25828-a810-4a1a-a410-a1c9873a348b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00514283s May 6 22:08:26.391: INFO: Pod "pod-cee25828-a810-4a1a-a410-a1c9873a348b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012736345s STEP: Saw pod success May 6 22:08:26.391: INFO: Pod "pod-cee25828-a810-4a1a-a410-a1c9873a348b" satisfied condition "Succeeded or Failed" May 6 22:08:26.393: INFO: Trying to get logs from node node1 pod pod-cee25828-a810-4a1a-a410-a1c9873a348b container test-container: STEP: delete the pod May 6 22:08:26.520: INFO: Waiting for pod pod-cee25828-a810-4a1a-a410-a1c9873a348b to disappear May 6 22:08:26.522: INFO: Pod pod-cee25828-a810-4a1a-a410-a1c9873a348b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:26.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3589" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":166,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:59.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostport STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled May 6 22:07:59.700: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:01.704: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:03.704: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:05.703: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:07.704: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:09.704: INFO: The status of Pod pod1 is Running (Ready = true) STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 10.10.190.208 on the node which pod1 resides and expect scheduled May 6 22:08:09.717: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:11.720: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:13.721: INFO: The status of Pod pod2 is Running (Ready = false) May 6 22:08:15.721: INFO: The status of Pod pod2 is Running (Ready = true) STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 10.10.190.208 but use UDP protocol on the node which pod2 resides May 6 22:08:15.734: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:17.739: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:19.738: INFO: The status of Pod pod3 is Running (Ready = true) May 6 22:08:19.750: INFO: The status of Pod e2e-host-exec is 
Pending, waiting for it to be Running (with Ready = true) May 6 22:08:21.753: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:23.753: INFO: The status of Pod e2e-host-exec is Running (Ready = true) STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 May 6 22:08:23.755: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 10.10.190.208 http://127.0.0.1:54323/hostname] Namespace:hostport-9882 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:08:23.755: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 May 6 22:08:23.988: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://10.10.190.208:54323/hostname] Namespace:hostport-9882 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:08:23.988: INFO: >>> kubeConfig: /root/.kube/config STEP: checking connectivity from pod e2e-host-exec to serverIP: 10.10.190.208, port: 54323 UDP May 6 22:08:24.108: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 10.10.190.208 54323] Namespace:hostport-9882 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:08:24.108: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:29.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostport-9882" for this suite. 
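
The scheduling rule exercised above is that hostPort conflicts are keyed on the full (hostIP, hostPort, protocol) triple, which is why pod1 (127.0.0.1/TCP), pod2 (10.10.190.208/TCP) and pod3 (10.10.190.208/UDP) can all bind 54323 on one node. A sketch of the first variant; pod2 and pod3 differ only in the commented fields (names and container port illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-sketch-1          # hypothetical name
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["netexec", "--http-port=8080"]
    ports:
    - containerPort: 8080
      hostPort: 54323
      hostIP: 127.0.0.1            # pod2: 10.10.190.208
      protocol: TCP                # pod3: UDP (with hostIP 10.10.190.208)
EOF
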
• [SLOW TEST:29.553 seconds] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":216,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:23.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-f80134ad-1bee-496e-a870-094f9d4848a6 STEP: Creating a pod to test consume configMaps May 6 22:08:24.012: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e" in namespace "projected-904" to be "Succeeded or Failed" May 6 22:08:24.015: INFO: Pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092983ms May 6 22:08:26.020: INFO: Pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007481798s May 6 22:08:28.024: INFO: Pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011492622s May 6 22:08:30.028: INFO: Pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015673861s STEP: Saw pod success May 6 22:08:30.028: INFO: Pod "pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e" satisfied condition "Succeeded or Failed" May 6 22:08:30.031: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e container agnhost-container: STEP: delete the pod May 6 22:08:30.046: INFO: Waiting for pod pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e to disappear May 6 22:08:30.048: INFO: Pod pod-projected-configmaps-74c6fcf5-c841-4084-be4a-4b08911fe98e no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:30.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-904" for this suite. 
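
"mappings and Item mode" means individual ConfigMap keys are projected to remapped paths with a per-item mode, overriding any volume-wide defaultMode. A sketch under the same assumptions as the earlier ConfigMap example (all names hypothetical):

kubectl create configmap cm-projected-sketch --from-literal=data-2=value-2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-items-sketch     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1    # illustrative image
    command: ["sh", "-c", "stat -Lc '%a' /etc/projected/path/to/data-2"]
    volumeMounts:
    - name: projected-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-volume
    projected:
      sources:
      - configMap:
          name: cm-projected-sketch
          items:
          - key: data-2            # only this key is projected
            path: path/to/data-2   # remapped path inside the mount
            mode: 0400             # per-item mode, wins over defaultMode
EOF
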
• [SLOW TEST:6.085 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":185,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:26.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:08:26.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552" in namespace "downward-api-4402" to be "Succeeded or Failed" May 6 22:08:26.597: INFO: Pod "downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10746ms May 6 22:08:28.601: INFO: Pod "downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005947399s May 6 22:08:30.605: INFO: Pod "downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010391749s STEP: Saw pod success May 6 22:08:30.605: INFO: Pod "downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552" satisfied condition "Succeeded or Failed" May 6 22:08:30.607: INFO: Trying to get logs from node node1 pod downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552 container client-container: STEP: delete the pod May 6 22:08:30.620: INFO: Waiting for pod downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552 to disappear May 6 22:08:30.622: INFO: Pod downwardapi-volume-0f8541e9-a757-411c-b0c1-50e30d7a0552 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:30.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4402" for this suite. 
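
The downward API volume above exposes the container's own memory limit as a file; with the default divisor of 1 the file holds the limit in bytes. A minimal sketch (names and sizes illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-sketch            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1    # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi               # the file below then reports 67108864
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
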
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":179,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:29.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:08:29.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 create -f -' May 6 22:08:29.580: INFO: stderr: "" May 6 22:08:29.580: INFO: stdout: "replicationcontroller/agnhost-primary created\n" May 6 22:08:29.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 create -f -' May 6 22:08:29.949: INFO: stderr: "" May 6 22:08:29.949: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 6 22:08:30.952: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:30.952: INFO: Found 0 / 1 May 6 22:08:31.954: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:31.954: INFO: Found 0 / 1 May 6 22:08:32.953: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:32.953: INFO: Found 0 / 1 May 6 22:08:33.955: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:33.955: INFO: Found 1 / 1 May 6 22:08:33.955: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 22:08:33.957: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:08:33.957: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 6 22:08:33.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 describe pod agnhost-primary-ddzl6' May 6 22:08:34.150: INFO: stderr: "" May 6 22:08:34.150: INFO: stdout: "Name: agnhost-primary-ddzl6\nNamespace: kubectl-680\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 06 May 2022 22:08:29 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.203\"\n ],\n \"mac\": \"0e:d8:30:92:b7:13\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.203\"\n ],\n \"mac\": \"0e:d8:30:92:b7:13\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: collectd\nStatus: Running\nIP: 10.244.3.203\nIPs:\n IP: 10.244.3.203\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: docker://7b937e9d48f2d4e6d1a6da4d1ef4e307b73412c183ed0b88174a86805eb5ad36\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 06 May 2022 22:08:32 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w8xtq (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-w8xtq:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-680/agnhost-primary-ddzl6 to node1\n Normal Pulling 3s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\"\n Normal Pulled 2s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/agnhost:2.32\" in 271.618846ms\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" May 6 22:08:34.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 describe rc agnhost-primary' May 6 22:08:34.353: INFO: stderr: "" May 6 22:08:34.353: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-680\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.32\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-primary-ddzl6\n" May 6 22:08:34.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 describe service agnhost-primary' May 6 22:08:34.549: INFO: stderr: "" May 6 22:08:34.549: 
INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-680\nLabels: app=agnhost\n role=primary\nAnnotations: \nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.233.27.11\nIPs: 10.233.27.11\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.3.203:6379\nSession Affinity: None\nEvents: \n" May 6 22:08:34.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 describe node master1' May 6 22:08:34.758: INFO: stderr: "" May 6 22:08:34.758: INFO: stdout: "Name: master1\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=master1\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: flannel.alpha.coreos.com/backend-data: null\n flannel.alpha.coreos.com/backend-type: host-gw\n flannel.alpha.coreos.com/kube-subnet-manager: true\n flannel.alpha.coreos.com/public-ip: 10.10.190.202\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 06 May 2022 20:07:30 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: master1\n AcquireTime: \n RenewTime: Fri, 06 May 2022 22:08:31 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Fri, 06 May 2022 20:13:12 +0000 Fri, 06 May 2022 20:13:12 +0000 FlannelIsUp Flannel is running on this node\n MemoryPressure False Fri, 06 May 2022 22:08:31 +0000 Fri, 06 May 2022 20:07:27 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 06 May 2022 22:08:31 +0000 Fri, 06 May 2022 20:07:27 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 06 May 2022 22:08:31 +0000 Fri, 06 May 2022 20:07:27 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 06 May 2022 22:08:31 +0000 Fri, 06 May 2022 20:13:06 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 10.10.190.202\n Hostname: master1\nCapacity:\n cpu: 80\n ephemeral-storage: 440625980Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 196518300Ki\n pods: 110\nAllocatable:\n cpu: 79550m\n ephemeral-storage: 406080902496\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 195629468Ki\n pods: 110\nSystem Info:\n Machine ID: fddab730508c43d4ba9efb575f362bc6\n System UUID: 00ACFB60-0631-E711-906E-0017A4403562\n Boot ID: 8708efb4-3ff3-4f9b-a116-eb7702a71201\n Kernel Version: 3.10.0-1160.62.1.el7.x86_64\n OS Image: CentOS Linux 7 (Core)\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://20.10.15\n Kubelet Version: v1.21.1\n Kube-Proxy Version: v1.21.1\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system container-registry-65d7c44b96-5pp99 0 (0%) 0 (0%) 0 (0%) 0 (0%) 113m\n kube-system coredns-8474476ff8-jtj8t 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 117m\n kube-system kube-apiserver-master1 250m (0%) 0 (0%) 0 (0%) 0 (0%) 111m\n kube-system 
kube-controller-manager-master1 200m (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-flannel-dz2ld 150m (0%) 300m (0%) 64M (0%) 500M (0%) 118m\n kube-system kube-multus-ds-amd64-pdpj8 100m (0%) 100m (0%) 90Mi (0%) 90Mi (0%) 118m\n kube-system kube-proxy-bnqzh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 119m\n kube-system kube-scheduler-master1 100m (0%) 0 (0%) 0 (0%) 0 (0%) 102m\n monitoring node-exporter-6wcwp 112m (0%) 270m (0%) 200Mi (0%) 220Mi (0%) 105m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 1012m (1%) 670m (0%)\n memory 431140Ki (0%) 1003316480 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" May 6 22:08:34.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-680 describe namespace kubectl-680' May 6 22:08:34.933: INFO: stderr: "" May 6 22:08:34.933: INFO: stdout: "Name: kubectl-680\nLabels: e2e-framework=kubectl\n e2e-run=96936095-ed63-4dd4-b820-7b3f3049601a\n kubernetes.io/metadata.name=kubectl-680\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:34.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-680" for this suite. • [SLOW TEST:5.721 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1084 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:30.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 22:08:34.751: INFO: &Pod{ObjectMeta:{send-events-89e1f917-38bf-4e98-88f1-c24b7ebc1c15 events-2678 df84f83f-7cd8-4992-9ade-cf42e1ca9f39 34573 0 2022-05-06 22:08:30 +0000 UTC map[name:foo time:728649009] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.238" ], "mac": "b6:22:44:31:bb:f3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.238" ], "mac": "b6:22:44:31:bb:f3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-06 22:08:30 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:08:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:08:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.238\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bqx9l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bqx9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,Host
PID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:08:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:08:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:08:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.238,StartTime:2022-05-06 22:08:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:08:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://82fbd6a7a7d58c1186c78b290466d8d99eecbd1f5ac131be5679a65d0b054d6e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 6 22:08:36.757: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 22:08:38.763: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:38.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2678" for this suite. 
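------------------------------
The two "Saw ... event for our pod" checks above come down to listing events filtered on the involved object and the reporting source. A minimal client-go sketch, reusing the pod and namespace names from the log; the kubeconfig path and the exact selector keys are assumptions patterned on how the e2e framework queries events.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Select events that reference the pod and were emitted by the scheduler;
	// swapping "source" to "kubelet" gives the kubelet-side check.
	sel := fields.Set{
		"involvedObject.kind":      "Pod",
		"involvedObject.name":      "send-events-89e1f917-38bf-4e98-88f1-c24b7ebc1c15",
		"involvedObject.namespace": "events-2678",
		"source":                   "default-scheduler",
	}.AsSelector().String()

	evts, err := client.CoreV1().Events("events-2678").List(context.TODO(),
		metav1.ListOptions{FieldSelector: sel})
	if err != nil {
		panic(err)
	}
	for _, e := range evts.Items {
		fmt.Printf("%s\t%s\t%s\n", e.Source.Component, e.Reason, e.Message)
	}
}
------------------------------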
• [SLOW TEST:8.071 seconds] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":14,"skipped":217,"failed":0} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:34.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's command May 6 22:08:34.977: INFO: Waiting up to 5m0s for pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b" in namespace "var-expansion-3055" to be "Succeeded or Failed" May 6 22:08:34.980: INFO: Pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.755078ms May 6 22:08:36.985: INFO: Pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008542216s May 6 22:08:38.989: INFO: Pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01187093s May 6 22:08:40.996: INFO: Pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019425898s STEP: Saw pod success May 6 22:08:40.996: INFO: Pod "var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b" satisfied condition "Succeeded or Failed" May 6 22:08:40.998: INFO: Trying to get logs from node node1 pod var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b container dapi-container: STEP: delete the pod May 6 22:08:41.014: INFO: Waiting for pod var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b to disappear May 6 22:08:41.016: INFO: Pod var-expansion-c3474b95-4967-4b37-a534-7ae7497f117b no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:41.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3055" for this suite. 
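------------------------------
The variable-expansion run above relies on Kubernetes' own $(VAR) substitution in the container command. A minimal sketch of the container spec; the env name, value, and image tag are illustrative, not read from the test.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// $(MESSAGE) is expanded by the kubelet from the container's env before
	// the process starts; no shell is involved in the substitution.
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-1",
		Command: []string{"/bin/echo", "$(MESSAGE)"},
		Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}

A literal "$(MESSAGE)" can be kept by escaping it as "$$(MESSAGE)", which is what the test's failure cases exercise elsewhere in the suite.
------------------------------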
• [SLOW TEST:6.078 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":217,"failed":0} SSSSSSSSS ------------------------------ {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":10,"skipped":215,"failed":0} [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:38.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test service account token: May 6 22:08:38.815: INFO: Waiting up to 5m0s for pod "test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb" in namespace "svcaccounts-111" to be "Succeeded or Failed" May 6 22:08:38.817: INFO: Pod "test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219833ms May 6 22:08:40.821: INFO: Pod "test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006613972s May 6 22:08:42.826: INFO: Pod "test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011651921s STEP: Saw pod success May 6 22:08:42.826: INFO: Pod "test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb" satisfied condition "Succeeded or Failed" May 6 22:08:42.829: INFO: Trying to get logs from node node2 pod test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb container agnhost-container: STEP: delete the pod May 6 22:08:42.905: INFO: Waiting for pod test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb to disappear May 6 22:08:42.908: INFO: Pod test-pod-23b31ed7-b233-4f0e-8827-3c22306d97bb no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:42.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-111" for this suite. 
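------------------------------
The "should mount projected service account token" run above uses a serviceAccountToken projection rather than the legacy token secret. A minimal sketch of that volume source; the audience, expiry, and volume name below are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3600)
	// A projected volume carrying a short-lived, audience-scoped service
	// account token; the kubelet rotates the file before it expires.
	vol := corev1.Volume{
		Name: "sa-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Audience:          "my-audience",
						ExpirationSeconds: &expiry,
						Path:              "token",
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------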
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":11,"skipped":215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:24.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-2096 STEP: creating service affinity-clusterip in namespace services-2096 STEP: creating replication controller affinity-clusterip in namespace services-2096 I0506 22:08:24.281265 27 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2096, replica count: 3 I0506 22:08:27.332712 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:08:30.333395 27 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:08:30.339: INFO: Creating new exec pod May 6 22:08:35.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2096 exec execpod-affinity225ln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' May 6 22:08:35.715: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" May 6 22:08:35.715: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:08:35.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2096 exec execpod-affinity225ln -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.46.81 80' May 6 22:08:36.326: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.46.81 80\nConnection to 10.233.46.81 80 port [tcp/http] succeeded!\n" May 6 22:08:36.326: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:08:36.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2096 exec execpod-affinity225ln -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.46.81:80/ ; done' May 6 22:08:36.642: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.46.81:80/\n" May 6 22:08:36.642: INFO: stdout: "\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs\naffinity-clusterip-p6bqs" May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Received response from host: affinity-clusterip-p6bqs May 6 22:08:36.642: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-2096, will wait for the garbage collector to delete the pods May 6 22:08:36.706: INFO: Deleting ReplicationController affinity-clusterip took: 3.5674ms May 6 22:08:36.807: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.971381ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:46.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2096" for this suite. 
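------------------------------
In the session-affinity run above, the sixteen identical "affinity-clusterip-p6bqs" responses are the assertion: with ClientIP affinity every request from the exec pod lands on the same backend (the earlier 400s are expected, since the nc probe only checks that the TCP connection succeeds). A minimal sketch of the Service spec involved; the selector and target port are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"name": "affinity-clusterip"},
			// Pin each client IP to a single backend pod instead of
			// round-robining across the three replicas.
			SessionAffinity: corev1.ServiceAffinityClientIP,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376),
			}},
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
------------------------------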
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:22.477 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":145,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:42.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:08:43.023: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481" in namespace "downward-api-8521" to be "Succeeded or Failed" May 6 22:08:43.026: INFO: Pod "downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378267ms May 6 22:08:45.031: INFO: Pod "downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007812791s May 6 22:08:47.036: INFO: Pod "downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01248968s STEP: Saw pod success May 6 22:08:47.036: INFO: Pod "downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481" satisfied condition "Succeeded or Failed" May 6 22:08:47.039: INFO: Trying to get logs from node node2 pod downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481 container client-container: STEP: delete the pod May 6 22:08:47.052: INFO: Waiting for pod downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481 to disappear May 6 22:08:47.054: INFO: Pod downwardapi-volume-2f302d27-d5d7-44fa-8779-bbe1eec72481 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:47.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8521" for this suite. 
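------------------------------
The "should set mode on item file" run above sets a per-item mode on a downward-API file; a sibling test later in this log does the same with DefaultMode. A minimal sketch showing both knobs together; the paths and mode values are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400)    // explicit mode for this one file
	defaultMode := int32(0644) // fallback for items without their own mode
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				DefaultMode: &defaultMode,
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
					Mode:     &itemMode,
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------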
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":247,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:41.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:08:41.079: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82" in namespace "projected-3069" to be "Succeeded or Failed" May 6 22:08:41.082: INFO: Pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.80017ms May 6 22:08:43.085: INFO: Pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006355372s May 6 22:08:45.088: INFO: Pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009450946s May 6 22:08:47.092: INFO: Pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012765264s STEP: Saw pod success May 6 22:08:47.092: INFO: Pod "downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82" satisfied condition "Succeeded or Failed" May 6 22:08:47.094: INFO: Trying to get logs from node node2 pod downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82 container client-container: STEP: delete the pod May 6 22:08:47.106: INFO: Waiting for pod downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82 to disappear May 6 22:08:47.108: INFO: Pod downwardapi-volume-b95a96fe-b7d6-48ae-9887-436727b43a82 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:47.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3069" for this suite. 
• [SLOW TEST:6.070 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ SS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":226,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:47.140: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 6 22:08:47.179: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-468 d7b6f436-29c7-43a8-a2ae-9b8f39806e66 34942 0 2022-05-06 22:08:47 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-06 22:08:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:08:47.180: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-468 d7b6f436-29c7-43a8-a2ae-9b8f39806e66 34944 0 2022-05-06 22:08:47 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-05-06 22:08:47 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:47.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-468" for this suite. 
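------------------------------
The watch test above starts its watch at the resourceVersion returned by the first update, so the API server replays only the later MODIFIED and DELETED events. A minimal client-go sketch of the same idea; the kubeconfig path is assumed, and the resourceVersion shown is illustrative (a real caller would use one returned by a prior write or list).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Starting from an older resourceVersion replays every change after that
	// point, as long as the version is still inside etcd's retention window.
	w, err := client.CoreV1().ConfigMaps("watch-468").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=e2e-watch-test-resource-version",
		ResourceVersion: "34940",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // MODIFIED, then DELETED, in the order they happened
	}
}
------------------------------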
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":17,"skipped":237,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:07:14.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with configMap that has name projected-configmap-test-upd-623704d4-9e2f-4a37-b127-d319880d6be7 STEP: Creating the pod May 6 22:07:14.671: INFO: The status of Pod pod-projected-configmaps-69e61bfa-bcce-4408-a59a-63febaf59266 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:16.675: INFO: The status of Pod pod-projected-configmaps-69e61bfa-bcce-4408-a59a-63febaf59266 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:18.676: INFO: The status of Pod pod-projected-configmaps-69e61bfa-bcce-4408-a59a-63febaf59266 is Pending, waiting for it to be Running (with Ready = true) May 6 22:07:20.674: INFO: The status of Pod pod-projected-configmaps-69e61bfa-bcce-4408-a59a-63febaf59266 is Running (Ready = true) STEP: Updating configmap projected-configmap-test-upd-623704d4-9e2f-4a37-b127-d319880d6be7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:51.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3637" for this suite. 
• [SLOW TEST:96.624 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:47.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:08:47.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267" in namespace "downward-api-3930" to be "Succeeded or Failed" May 6 22:08:47.180: INFO: Pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267": Phase="Pending", Reason="", readiness=false. Elapsed: 1.827555ms May 6 22:08:49.184: INFO: Pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005969863s May 6 22:08:51.189: INFO: Pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010964389s May 6 22:08:53.193: INFO: Pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014912839s STEP: Saw pod success May 6 22:08:53.193: INFO: Pod "downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267" satisfied condition "Succeeded or Failed" May 6 22:08:53.195: INFO: Trying to get logs from node node2 pod downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267 container client-container: STEP: delete the pod May 6 22:08:53.236: INFO: Waiting for pod downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267 to disappear May 6 22:08:53.238: INFO: Pod downwardapi-volume-c0ec1a7f-cbfa-4846-ac1d-fc535965e267 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:53.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3930" for this suite. 
• [SLOW TEST:6.098 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:47.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 22:08:53.287: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:53.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4031" for this suite. • [SLOW TEST:6.074 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":255,"failed":0} S ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:25.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:08:25.290: INFO: created pod May 6 22:08:25.290: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6023" to be "Succeeded or Failed" May 6 22:08:25.292: INFO: 
Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 1.963209ms May 6 22:08:27.296: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006147635s May 6 22:08:29.300: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009797498s STEP: Saw pod success May 6 22:08:29.300: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" May 6 22:08:59.304: INFO: polling logs May 6 22:08:59.311: INFO: Pod logs: 2022/05/06 22:08:28 OK: Got token 2022/05/06 22:08:28 validating with in-cluster discovery 2022/05/06 22:08:28 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/05/06 22:08:28 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6023:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651875505, NotBefore:1651874905, IssuedAt:1651874905, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6023", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"2d256ffa-3bbb-47f6-a8c0-005a43d14c6d"}}} 2022/05/06 22:08:28 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/05/06 22:08:28 OK: Validated signature on JWT 2022/05/06 22:08:28 OK: Got valid claims from token! 2022/05/06 22:08:28 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6023:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1651875505, NotBefore:1651874905, IssuedAt:1651874905, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6023", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"2d256ffa-3bbb-47f6-a8c0-005a43d14c6d"}}} May 6 22:08:59.311: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:59.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6023" for this suite. 
• [SLOW TEST:34.146 seconds] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":2,"skipped":89,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:53.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: Gathering metrics May 6 22:08:59.384: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 6 22:08:59.534: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:08:59.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-841" for this suite. 
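------------------------------
The garbage-collector test above hinges on the deleteOptions: with foreground propagation the RC is marked for deletion but stays visible until the GC has removed all of its pods. A minimal client-go sketch of such a delete; the kubeconfig path and RC name are illustrative, since the log does not show the RC's name.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Foreground: the RC gets a deletionTimestamp plus a foregroundDeletion
	// finalizer, and is only removed once its dependent pods are gone.
	policy := metav1.DeletePropagationForeground
	err = client.CoreV1().ReplicationControllers("gc-841").Delete(context.TODO(),
		"example-rc",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
}

Orphan propagation is the opposite trade: the RC disappears at once and its pods are left running without an owner.
------------------------------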
• [SLOW TEST:6.232 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":19,"skipped":256,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:51.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1974.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1974.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:09:03.425: INFO: DNS probes using dns-1974/dns-test-eabc0741-f014-40df-ab63-a80760b4fdbf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:03.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1974" for this suite. 
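------------------------------
The DNS probe above loops dig over UDP and TCP against the API server's service name and the pod's own A record. Stripped of the retry scaffolding, the core check is a single lookup that any in-pod program can reproduce; a minimal Go sketch, meant to run inside a pod on the cluster.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolves via the cluster DNS service configured in the pod's
	// /etc/resolv.conf; success proves the kubernetes.default service name
	// is served, which is what the wheezy/jessie probers write "OK" for.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		fmt.Println(a)
	}
}
------------------------------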
• [SLOW TEST:12.126 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:06.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 6 22:08:06.728: INFO: PodSpec: initContainers in spec.initContainers May 6 22:09:04.344: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-f107e145-b686-4e83-b33e-67a344a9bfc1", GenerateName:"", Namespace:"init-container-3122", SelfLink:"", UID:"8b8282c7-54a6-4816-94fa-aa0940eed841", ResourceVersion:"35401", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63787471686, loc:(*time.Location)(0x9e2e180)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"728397170"}, Annotations:map[string]string{"k8s.v1.cni.cncf.io/network-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.194\"\n ],\n \"mac\": \"8a:af:db:47:fb:99\",\n \"default\": true,\n \"dns\": {}\n}]", "k8s.v1.cni.cncf.io/networks-status":"[{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.194\"\n ],\n \"mac\": \"8a:af:db:47:fb:99\",\n \"default\": true,\n \"dns\": {}\n}]", "kubernetes.io/psp":"collectd"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ddac48), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ddac60)}, v1.ManagedFieldsEntry{Manager:"multus", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ddac78), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ddac90)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc002ddaca8), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002ddacc0)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-zcrc6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), 
NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc001420f20), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zcrc6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zcrc6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zcrc6", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0017e4448), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"node1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00182dce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0017e44d0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0017e44f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0017e44f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0017e44fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc000e852d0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471686, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471686, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471686, loc:(*time.Location)(0x9e2e180)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471686, loc:(*time.Location)(0x9e2e180)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.10.190.207", PodIP:"10.244.3.194", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.3.194"}}, StartTime:(*v1.Time)(0xc002ddacf0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(0xc00182ddc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00182de30)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"docker-pullable://k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"docker://171537dc011614a4316a0fd21b64d279966d92f2db438058962bfa6985e88a29", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001421180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001421100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0017e457f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:04.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3122" for this suite. 
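The pod dump above is easier to read as the manifest it came from. A minimal sketch that reproduces the scenario (pod name hypothetical; images and commands taken from the dump): init1 runs /bin/false and fails forever, so init2 and the app container run1 never start, while restartPolicy: Always keeps the kubelet retrying init1.

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-init-demo        # hypothetical name
      labels:
        name: foo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init1
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/false"]  # always fails, so nothing after it ever starts
      - name: init2
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["/bin/true"]
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.4.1
    EOF
    kubectl get pod pod-init-demo   # STATUS cycles Init:Error / Init:CrashLoopBackOff; run1 never starts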
• [SLOW TEST:57.646 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":-1,"completed":10,"skipped":117,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:03.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create deployment with httpd image May 6 22:09:03.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7688 create -f -' May 6 22:09:03.892: INFO: stderr: "" May 6 22:09:03.892: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image May 6 22:09:03.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7688 diff -f -' May 6 22:09:04.279: INFO: rc: 1 May 6 22:09:04.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7688 delete -f -' May 6 22:09:04.409: INFO: stderr: "" May 6 22:09:04.409: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:04.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7688" for this suite. 
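The rc: 1 logged for kubectl diff below is the expected outcome, not a failure: diff exits 0 when live and declared state match, 1 when a difference is found, and greater than 1 on error. A sketch, with the manifest file and image tag as hypothetical stand-ins:

    kubectl create deployment httpd-deployment --image=httpd:2.4.38-alpine   # tag is a guess
    # Diff a manifest that declares a different image for the same deployment:
    kubectl diff -f httpd-deployment-updated.yaml
    echo "rc=$?"   # 0: no differences, 1: differences found, >1: kubectl or diff failed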
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:46.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-622 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-622 I0506 22:08:46.828829 27 runners.go:190] Created replication controller with name: externalname-service, namespace: services-622, replica count: 2 I0506 22:08:49.879623 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:08:52.879916 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:08:52.879: INFO: Creating new exec pod May 6 22:09:03.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-622 exec execpodph6nw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 6 22:09:04.619: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 6 22:09:04.619: INFO: stdout: "" May 6 22:09:05.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-622 exec execpodph6nw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 6 22:09:05.886: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 6 22:09:05.886: INFO: stdout: "externalname-service-2gcf6" May 6 22:09:05.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-622 exec execpodph6nw -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.24.75 80' May 6 22:09:06.170: INFO: stderr: "+ nc -v -t -w 2 10.233.24.75 80\nConnection to 10.233.24.75 80 port [tcp/http] succeeded!\n+ echo hostName\n" May 6 22:09:06.170: INFO: stdout: "externalname-service-mtmfj" May 6 22:09:06.170: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-622" for this suite. 
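Recreating the ExternalName-to-ClusterIP flip by hand; example.com, the port numbers, and the selector are assumptions, since the log does not show the service spec:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: externalname-service
    spec:
      type: ExternalName
      externalName: example.com
    EOF

    # Flip the type; externalName must be cleared and ports/selector supplied:
    kubectl patch service externalname-service --type=merge -p '{
      "spec": {"type": "ClusterIP", "externalName": null,
               "selector": {"name": "externalname-service"},
               "ports": [{"port": 80, "targetPort": 9376}]}}'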
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:19.398 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":8,"skipped":168,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:59.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir volume type on tmpfs May 6 22:08:59.372: INFO: Waiting up to 5m0s for pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308" in namespace "emptydir-548" to be "Succeeded or Failed" May 6 22:08:59.377: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.944166ms May 6 22:09:01.381: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00851377s May 6 22:09:03.385: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012948373s May 6 22:09:05.389: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308": Phase="Pending", Reason="", readiness=false. Elapsed: 6.016857036s May 6 22:09:07.394: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.021534773s STEP: Saw pod success May 6 22:09:07.394: INFO: Pod "pod-2e9d81f3-53fe-4b01-8481-596d86357308" satisfied condition "Succeeded or Failed" May 6 22:09:07.396: INFO: Trying to get logs from node node1 pod pod-2e9d81f3-53fe-4b01-8481-596d86357308 container test-container: STEP: delete the pod May 6 22:09:07.410: INFO: Waiting for pod pod-2e9d81f3-53fe-4b01-8481-596d86357308 to disappear May 6 22:09:07.412: INFO: Pod pod-2e9d81f3-53fe-4b01-8481-596d86357308 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:07.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-548" for this suite. 
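What the test below asserts is that an emptyDir with medium: Memory is a tmpfs mount created with mode 0777. A sketch to check both by hand (pod name hypothetical; image from the log):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-mode-check   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "ls -ld /test-volume && grep /test-volume /proc/mounts"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory          # tmpfs-backed emptyDir
    EOF
    kubectl logs emptydir-mode-check   # expect drwxrwxrwx and a tmpfs mount entry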
• [SLOW TEST:8.085 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":92,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:04.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:09:04.392: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 6 22:09:06.420: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:07.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8029" for this suite. 
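A hand-run version of the quota scenario below (names mirror the log; the RC template is an assumption): the RC itself is created successfully, but its pods would exceed the 2-pod quota, so the controller surfaces a ReplicaFailure condition until the RC is scaled back within quota.

    kubectl create quota condition-test --hard=pods=2
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: condition-test
    spec:
      replicas: 3                # one more pod than the quota allows
      selector:
        name: condition-test
      template:
        metadata:
          labels:
            name: condition-test
        spec:
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.4.1
    EOF
    kubectl get rc condition-test -o jsonpath='{.status.conditions[?(@.type=="ReplicaFailure")].message}'
    kubectl scale rc condition-test --replicas=2   # back within quota; the condition clears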
•S ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":11,"skipped":125,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:07.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:09:07.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501" in namespace "projected-8596" to be "Succeeded or Failed" May 6 22:09:07.510: INFO: Pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501": Phase="Pending", Reason="", readiness=false. Elapsed: 1.871046ms May 6 22:09:09.513: INFO: Pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005112181s May 6 22:09:11.520: INFO: Pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011260252s May 6 22:09:13.523: INFO: Pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015004905s STEP: Saw pod success May 6 22:09:13.523: INFO: Pod "downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501" satisfied condition "Succeeded or Failed" May 6 22:09:13.525: INFO: Trying to get logs from node node2 pod downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501 container client-container: STEP: delete the pod May 6 22:09:13.541: INFO: Waiting for pod downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501 to disappear May 6 22:09:13.543: INFO: Pod downwardapi-volume-2d1fc55d-1a8d-41b1-af4b-d061668c9501 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:13.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8596" for this suite. 
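The projected downwardAPI test below mounts a volume exposing limits.memory without setting a memory limit on the container, so the kubelet substitutes the node's allocatable memory. A minimal sketch (pod name hypothetical):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-memlimit-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
        command: ["sh", "-c", "cat /etc/podinfo/memlimit"]
        # no resources.limits.memory on purpose
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: memlimit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
    EOF
    kubectl logs downward-memlimit-demo   # prints the node's allocatable memory in bytes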
• [SLOW TEST:6.078 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:07.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1308 STEP: creating the pod May 6 22:09:07.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 create -f -' May 6 22:09:07.846: INFO: stderr: "" May 6 22:09:07.846: INFO: stdout: "pod/pause created\n" May 6 22:09:07.846: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 6 22:09:07.846: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6952" to be "running and ready" May 6 22:09:07.848: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024627ms May 6 22:09:09.851: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005718933s May 6 22:09:11.855: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008748281s May 6 22:09:13.858: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.011906693s May 6 22:09:13.858: INFO: Pod "pause" satisfied condition "running and ready" May 6 22:09:13.858: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: adding the label testing-label with value testing-label-value to a pod May 6 22:09:13.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 label pods pause testing-label=testing-label-value' May 6 22:09:14.053: INFO: stderr: "" May 6 22:09:14.053: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 6 22:09:14.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 get pod pause -L testing-label' May 6 22:09:14.215: INFO: stderr: "" May 6 22:09:14.215: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n" STEP: removing the label testing-label of a pod May 6 22:09:14.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 label pods pause testing-label-' May 6 22:09:14.403: INFO: stderr: "" May 6 22:09:14.403: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 6 22:09:14.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 get pod pause -L testing-label' May 6 22:09:14.586: INFO: stderr: "" May 6 22:09:14.586: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 STEP: using delete to clean up resources May 6 22:09:14.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 delete --grace-period=0 --force -f -' May 6 22:09:14.733: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:09:14.733: INFO: stdout: "pod \"pause\" force deleted\n" May 6 22:09:14.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 get rc,svc -l name=pause --no-headers' May 6 22:09:14.955: INFO: stderr: "No resources found in kubectl-6952 namespace.\n" May 6 22:09:14.955: INFO: stdout: "" May 6 22:09:14.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6952 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:09:15.128: INFO: stderr: "" May 6 22:09:15.128: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:15.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6952" for this suite. 
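The label round-trip above, reduced to plain kubectl (names from the log):

    kubectl label pods pause testing-label=testing-label-value   # add
    kubectl get pod pause -L testing-label                       # shows a TESTING-LABEL column
    kubectl label pods pause testing-label-                      # trailing '-' removes the label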
• [SLOW TEST:7.693 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1306 should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":12,"skipped":128,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":288,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:53.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:08:53.973: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:08:55.985: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:08:57.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:08:59.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:09:01.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471733, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:09:04.997: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:15.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6519" for this suite. STEP: Destroying namespace "webhook-6519-markers" for this suite. 
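The deny behavior above comes from a ValidatingWebhookConfiguration matching pod and configmap writes; the "namespace that bypass the webhook" step works because the real registration carries a namespaceSelector exempting the test's "-markers" namespace. A sketch of the shape of such a registration; the configuration name, path, and port are assumptions, and caBundle is omitted (the real registration pins the webhook server's CA):

    kubectl apply -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-unwanted-pod-and-configmap   # hypothetical name
    webhooks:
    - name: deny-unwanted.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods", "configmaps"]
      # The real test adds a namespaceSelector here so its "-markers"
      # namespace bypasses the webhook; omitted in this sketch (match all).
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-6519
          path: /always-deny   # hypothetical path
          port: 8443           # assumed port
    EOF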
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:21.903 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ S ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":14,"skipped":288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:15.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name secret-emptykey-test-cfd04de0-f267-4bd1-9594-69fc165c75e4 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:15.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9731" for this suite. 
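The rejected object from the secrets test below, reconstructed (the secret name is from the log; the exact server error is paraphrased):

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-emptykey-test
    data:
      "": dmFsdWUtMQ==   # empty key; the value is base64 for "value-1"
    EOF
    # Rejected by API validation with an error along the lines of:
    #   Invalid value: "": a valid config key must consist of alphanumeric
    #   characters, '-', '_' or '.'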
• ------------------------------ {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":13,"skipped":214,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:04.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service endpoint-test2 in namespace services-3839 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3839 to expose endpoints map[] May 6 22:09:04.506: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found May 6 22:09:05.513: INFO: successfully validated that service endpoint-test2 in namespace services-3839 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-3839 May 6 22:09:05.528: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:07.532: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:09.531: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:11.533: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:13.532: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3839 to expose endpoints map[pod1:[80]] May 6 22:09:13.542: INFO: successfully validated that service endpoint-test2 in namespace services-3839 exposes endpoints map[pod1:[80]] STEP: Creating pod pod2 in namespace services-3839 May 6 22:09:13.556: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:15.559: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:17.561: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3839 to expose endpoints map[pod1:[80] pod2:[80]] May 6 22:09:17.575: INFO: successfully validated that service endpoint-test2 in namespace services-3839 exposes endpoints map[pod1:[80] pod2:[80]] STEP: Deleting pod pod1 in namespace services-3839 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3839 to expose endpoints map[pod2:[80]] May 6 22:09:17.593: INFO: successfully validated that service endpoint-test2 in namespace services-3839 exposes endpoints map[pod2:[80]] STEP: Deleting pod pod2 in namespace services-3839 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3839 to expose endpoints map[] May 6 22:09:17.605: INFO: successfully validated that service endpoint-test2 in namespace services-3839 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:17.613: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "services-3839" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:13.141 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":5,"skipped":85,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:30.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-9671 May 6 22:08:30.119: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:32.123: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:08:34.123: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 6 22:08:34.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 6 22:08:34.388: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 6 22:08:34.388: INFO: stdout: "iptables" May 6 22:08:34.389: INFO: proxyMode: iptables May 6 22:08:34.396: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 22:08:34.398: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-clusterip-timeout in namespace services-9671 STEP: creating replication controller affinity-clusterip-timeout in namespace services-9671 I0506 22:08:34.407521 32 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9671, replica count: 3 I0506 22:08:37.459375 32 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:08:40.460686 32 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:08:40.464: INFO: Creating new exec pod May 6 22:08:45.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec execpod-affinityt5vk2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' May 6 22:08:45.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 
affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" May 6 22:08:45.764: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:08:45.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec execpod-affinityt5vk2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.16.71 80' May 6 22:08:46.029: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.16.71 80\nConnection to 10.233.16.71 80 port [tcp/http] succeeded!\n" May 6 22:08:46.029: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:08:46.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec execpod-affinityt5vk2 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.16.71:80/ ; done' May 6 22:08:46.534: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n" May 6 22:08:46.534: INFO: stdout: "\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2\naffinity-clusterip-timeout-v9vg2" May 6 22:08:46.534: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.534: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: 
affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Received response from host: affinity-clusterip-timeout-v9vg2 May 6 22:08:46.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec execpod-affinityt5vk2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.16.71:80/' May 6 22:08:46.837: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n" May 6 22:08:46.837: INFO: stdout: "affinity-clusterip-timeout-v9vg2" May 6 22:09:06.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9671 exec execpod-affinityt5vk2 -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.233.16.71:80/' May 6 22:09:07.148: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.233.16.71:80/\n" May 6 22:09:07.148: INFO: stdout: "affinity-clusterip-timeout-tmbtg" May 6 22:09:07.148: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9671, will wait for the garbage collector to delete the pods May 6 22:09:07.211: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 3.735661ms May 6 22:09:07.312: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.269817ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:18.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9671" for this suite. 
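The affinity flip at the end is the point of the test: the first batch of requests all hit affinity-clusterip-timeout-v9vg2, and after the roughly 20 s pause (22:08:46 to 22:09:06) the affinity entry has expired, so the next request lands on -tmbtg. The service shape that produces this, with the timeout value and target port as assumptions since the log does not print the spec:

    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-clusterip-timeout
    spec:
      selector:
        name: affinity-clusterip-timeout
      ports:
      - port: 80
        targetPort: 9376         # assumed backend port
      sessionAffinity: ClientIP
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10     # assumed; one client sticks to one backend until idle this long
    EOF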
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:48.449 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":191,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:59.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 6 22:09:00.040: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:09:00.053: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:09:02.063: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:09:04.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:09:06.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471740, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:09:09.074: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 6 22:09:20.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=webhook-3302 attach --namespace=webhook-3302 to-be-attached-pod -i -c=container1' May 6 22:09:20.296: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:20.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3302" for this suite. STEP: Destroying namespace "webhook-3302-markers" for this suite. 
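Here too the rc: 1 from kubectl attach is the assertion, not an error: attach goes through the pods/attach subresource with the CONNECT operation, which the registered webhook denies. A sketch of such a registration; the configuration name, path, and port are assumptions (caBundle omitted):

    kubectl apply -f - <<EOF
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-attaching-pod   # hypothetical name
    webhooks:
    - name: deny-attaching-pod.example.com
      admissionReviewVersions: ["v1"]
      sideEffects: None
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CONNECT"]
        resources: ["pods/attach"]
      clientConfig:
        service:
          name: e2e-test-webhook
          namespace: webhook-3302
          path: /pods/attach   # hypothetical path
          port: 8443           # assumed port
    EOF
    kubectl attach to-be-attached-pod -i -c=container1   # now fails admission, matching the rc: 1 above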
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.741 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":20,"skipped":279,"failed":0} SSSSS ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":114,"failed":0} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:13.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-634600c4-f8ec-4dad-919f-e8c1d5af008f STEP: Creating a pod to test consume secrets May 6 22:09:13.591: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2" in namespace "projected-5783" to be "Succeeded or Failed" May 6 22:09:13.594: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.992477ms May 6 22:09:15.597: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006159388s May 6 22:09:17.601: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010308748s May 6 22:09:19.606: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014964401s May 6 22:09:21.611: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.020556692s STEP: Saw pod success May 6 22:09:21.611: INFO: Pod "pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2" satisfied condition "Succeeded or Failed" May 6 22:09:21.614: INFO: Trying to get logs from node node1 pod pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2 container projected-secret-volume-test: STEP: delete the pod May 6 22:09:21.636: INFO: Waiting for pod pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2 to disappear May 6 22:09:21.638: INFO: Pod pod-projected-secrets-392d232e-eeda-490d-818a-1724539e86d2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:21.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5783" for this suite. • [SLOW TEST:8.092 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":114,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:17.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 22:09:17.679: INFO: Waiting up to 5m0s for pod "pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee" in namespace "emptydir-6352" to be "Succeeded or Failed" May 6 22:09:17.682: INFO: Pod "pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.09419ms May 6 22:09:19.685: INFO: Pod "pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006577311s May 6 22:09:21.689: INFO: Pod "pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010126423s STEP: Saw pod success May 6 22:09:21.689: INFO: Pod "pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee" satisfied condition "Succeeded or Failed" May 6 22:09:21.691: INFO: Trying to get logs from node node2 pod pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee container test-container: STEP: delete the pod May 6 22:09:21.814: INFO: Waiting for pod pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee to disappear May 6 22:09:21.816: INFO: Pod pod-a7e192ae-0e8a-4512-9fc9-d5f304d065ee no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:21.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6352" for this suite. 
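------------------------------
For reference, the (root,0777,tmpfs) case boils down to a memory-backed emptyDir plus a container that reports the mount's permissions (emptyDir directories default to 0777). A rough stand-alone sketch; busybox and the shell command are illustrative stand-ins for the suite's agnhost mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-root-0777-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium: Memory makes the kubelet back the volume with tmpfs.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%a' /test-volume && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b)) // pipe into `kubectl create -f -` to try it
}
------------------------------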
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":96,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:18.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-d52336a1-675b-48dc-8edf-e61a350e8f90 STEP: Creating a pod to test consume configMaps May 6 22:09:18.593: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2" in namespace "configmap-5791" to be "Succeeded or Failed" May 6 22:09:18.595: INFO: Pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2": Phase="Pending", Reason="", readiness=false. Elapsed: 1.883011ms May 6 22:09:20.598: INFO: Pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005001689s May 6 22:09:22.602: INFO: Pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009360473s May 6 22:09:24.606: INFO: Pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012970628s STEP: Saw pod success May 6 22:09:24.606: INFO: Pod "pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2" satisfied condition "Succeeded or Failed" May 6 22:09:24.608: INFO: Trying to get logs from node node2 pod pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2 container agnhost-container: STEP: delete the pod May 6 22:09:24.620: INFO: Waiting for pod pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2 to disappear May 6 22:09:24.622: INFO: Pod pod-configmaps-4ef232d2-82ee-4271-951c-6fa8773d00d2 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:24.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5791" for this suite. 
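------------------------------
The ConfigMap-volume case just run pairs a ConfigMap with a pod that mounts it and reads the projected file back; each data key becomes a file under the mount path. Sketched roughly (object names and the busybox image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "agnhost-container",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/data-1"}, // each key is a file
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------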
• [SLOW TEST:6.089 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":194,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:15.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:09:15.697: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:09:17.706: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:09:19.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:09:21.717: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471755, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:09:24.715: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:25.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8586" for this suite. STEP: Destroying namespace "webhook-8586-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.451 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":14,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:06.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating server pod server in namespace prestop-2344 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-2344 STEP: Deleting pre-stop pod May 6 22:09:27.267: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:27.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2344" for this suite. • [SLOW TEST:21.086 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":9,"skipped":172,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:21.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:27.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-8289" for this suite. 
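------------------------------
The Sysctls spec above leans on pod-level SecurityContext.Sysctls: kernel.shm_rmid_forced is namespaced and on the kubelet's default safe list, so no allowlist flag is needed, and the pod simply reads the value back out of /proc. A rough sketch of such a pod (busybox is an illustrative image choice):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				// Safe, namespaced sysctl: applies only to this pod's kernel namespaces.
				Sysctls: []corev1.Sysctl{{Name: "kernel.shm_rmid_forced", Value: "1"}},
			},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"cat", "/proc/sys/kernel/shm_rmid_forced"}, // should print 1
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------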
• [SLOW TEST:6.067 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":7,"skipped":115,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:24.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 6 22:09:24.676: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36188 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:09:24.676: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36189 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:09:24.677: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36190 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 6 22:09:34.703: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36601 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 
2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:09:34.703: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36602 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:09:34.703: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-4885 762a3732-20c2-4cd4-a100-59abd6f0f5bd 36603 0 2022-05-06 22:09:24 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2022-05-06 22:09:24 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:34.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4885" for this suite. • [SLOW TEST:10.073 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":15,"skipped":197,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:20.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
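------------------------------
The Watchers spec above turns on a core watch property: a label-selector watch emits DELETED when an object is relabeled out of the selector and ADDED when it is relabeled back, with resourceVersions strictly increasing across the stream (36188–36190, then 36601–36603 in the log). A minimal client-go watch along the same lines; the namespace and label are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Only ConfigMaps carrying this label are visible to the watch; relabeling
	// one out of the selector surfaces as DELETED, relabeling it back as ADDED.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a watch error status object
		}
		fmt.Printf("%-8s %s rv=%s\n", ev.Type, cm.Name, cm.ResourceVersion)
	}
}
------------------------------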
May 6 22:09:20.380: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:22.385: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:24.384: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:26.385: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 6 22:09:26.399: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:28.402: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:30.402: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:32.403: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 6 22:09:32.409: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:32.412: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:34.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:34.417: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:36.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:36.418: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:38.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:38.416: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:40.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:40.415: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:42.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:42.416: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:44.413: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:44.416: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:46.414: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:46.417: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:09:48.412: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:09:48.415: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:48.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1045" for this suite. 
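------------------------------
The roughly sixteen seconds between the delete call and the pod disappearing above is the preStop hook plus graceful termination at work: the kubelet runs the hook to completion (bounded by the grace period) before the container is torn down. The shape of such a container, sketched with client-go v0.21 types (corev1.Handler was renamed LifecycleHandler in later releases; the wget target stands in for the suite's HTTP handler pod):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs inside the container just before termination; deletion
					// blocks on it up to terminationGracePeriodSeconds.
					PreStop: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "wget -qO- http://pod-handle-http-request:8080/echo?msg=prestop"},
						},
					},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------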
• [SLOW TEST:28.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":284,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:48.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting /apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching May 6 22:09:49.048: INFO: starting watch STEP: patching STEP: updating May 6 22:09:49.057: INFO: waiting for watch events with expected annotations May 6 22:09:49.057: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:49.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-2285" for this suite. 
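------------------------------
The Certificates spec above walks certificates.k8s.io/v1 end to end, including the /approval and /status subresources it patches and updates. A compact sketch of the create-then-approve portion; the PEM request bytes are assumed to be generated elsewhere (e.g. crypto/x509's CreateCertificateRequest), and the signerName shown is the built-in client signer:

package main

import (
	"context"

	certificatesv1 "k8s.io/api/certificates/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	var csrPEM []byte // assumption: a PEM-encoded PKCS#10 request produced elsewhere

	csr := &certificatesv1.CertificateSigningRequest{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-csr"},
		Spec: certificatesv1.CertificateSigningRequestSpec{
			Request:    csrPEM,
			SignerName: "kubernetes.io/kube-apiserver-client",
			Usages:     []certificatesv1.KeyUsage{certificatesv1.UsageClientAuth},
		},
	}
	created, err := cs.CertificatesV1().CertificateSigningRequests().Create(ctx, csr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Approval is a condition written through the dedicated /approval subresource.
	created.Status.Conditions = append(created.Status.Conditions, certificatesv1.CertificateSigningRequestCondition{
		Type:    certificatesv1.CertificateApproved,
		Status:  corev1.ConditionTrue,
		Reason:  "DemoApprove",
		Message: "approved for illustration",
	})
	if _, err := cs.CertificatesV1().CertificateSigningRequests().UpdateApproval(ctx, created.Name, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------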
• ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":22,"skipped":291,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:21.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 6 22:09:21.688: INFO: >>> kubeConfig: /root/.kube/config May 6 22:09:30.284: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:49.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9590" for this suite. • [SLOW TEST:27.845 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":6,"skipped":120,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:49.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-9255e8ca-e10a-40b3-a70f-0c40701cc62a STEP: Creating a pod to test consume secrets May 6 22:09:49.576: INFO: Waiting up to 5m0s for pod "pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6" in namespace "secrets-5019" to be "Succeeded or Failed" May 6 22:09:49.579: INFO: Pod "pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.704624ms May 6 22:09:51.585: INFO: Pod "pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009328652s May 6 22:09:53.588: INFO: Pod "pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011998997s STEP: Saw pod success May 6 22:09:53.588: INFO: Pod "pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6" satisfied condition "Succeeded or Failed" May 6 22:09:53.591: INFO: Trying to get logs from node node2 pod pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6 container secret-volume-test: STEP: delete the pod May 6 22:09:53.606: INFO: Waiting for pod pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6 to disappear May 6 22:09:53.608: INFO: Pod pod-secrets-f01ce708-3a04-44d3-8c1a-2a3bb8e360d6 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:53.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5019" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":134,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:27.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-configmap-pqnt STEP: Creating a pod to test atomic-volume-subpath May 6 22:09:28.033: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-pqnt" in namespace "subpath-4532" to be "Succeeded or Failed" May 6 22:09:28.035: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160421ms May 6 22:09:30.040: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007731601s May 6 22:09:32.045: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012138925s May 6 22:09:34.048: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01579865s May 6 22:09:36.053: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 8.020653736s May 6 22:09:38.057: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 10.024760091s May 6 22:09:40.062: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 12.028949934s May 6 22:09:42.065: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 14.032756175s May 6 22:09:44.071: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 16.038070725s May 6 22:09:46.076: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 18.043103591s May 6 22:09:48.079: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.046859784s May 6 22:09:50.084: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 22.05178296s May 6 22:09:52.088: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Running", Reason="", readiness=true. Elapsed: 24.055503377s May 6 22:09:54.094: INFO: Pod "pod-subpath-test-configmap-pqnt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.061468094s STEP: Saw pod success May 6 22:09:54.094: INFO: Pod "pod-subpath-test-configmap-pqnt" satisfied condition "Succeeded or Failed" May 6 22:09:54.097: INFO: Trying to get logs from node node1 pod pod-subpath-test-configmap-pqnt container test-container-subpath-configmap-pqnt: STEP: delete the pod May 6 22:09:54.113: INFO: Waiting for pod pod-subpath-test-configmap-pqnt to disappear May 6 22:09:54.115: INFO: Pod pod-subpath-test-configmap-pqnt no longer exists STEP: Deleting pod pod-subpath-test-configmap-pqnt May 6 22:09:54.115: INFO: Deleting pod "pod-subpath-test-configmap-pqnt" in namespace "subpath-4532" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:54.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4532" for this suite. • [SLOW TEST:26.153 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":135,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:54.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:54.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9042" for this suite. 
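------------------------------
In the Subpath run above (pod-subpath-test-configmap-pqnt), the long Running phase is the container repeatedly re-reading the file while the kubelet's atomic writer swaps symlinks underneath it. The mount mechanics are just a VolumeMount with SubPath set, so the container sees one key of the ConfigMap volume as a single file. Roughly, with illustrative names and image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
				}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox",
				Command: []string{"sh", "-c", "for i in $(seq 1 20); do cat /test/data; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test/data",
					SubPath:   "configmap-key", // mount a single key of the volume as this one file
				}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------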
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":9,"skipped":166,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:54.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:09:54.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc" in namespace "projected-1850" to be "Succeeded or Failed" May 6 22:09:54.281: INFO: Pod "downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.765938ms May 6 22:09:56.284: INFO: Pod "downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006951076s May 6 22:09:58.287: INFO: Pod "downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010606338s STEP: Saw pod success May 6 22:09:58.288: INFO: Pod "downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc" satisfied condition "Succeeded or Failed" May 6 22:09:58.290: INFO: Trying to get logs from node node2 pod downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc container client-container: STEP: delete the pod May 6 22:09:58.304: INFO: Waiting for pod downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc to disappear May 6 22:09:58.307: INFO: Pod downwardapi-volume-be1db9f8-3aaa-41e3-8d80-01712900aefc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:09:58.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1850" for this suite. 
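------------------------------
"Set mode on item file" in the spec above comes down to the optional Mode on a projected downwardAPI item: the kubelet writes the pod metadata into the file and chmods it, and the container stats and cats it back. A rough sketch using mode 0400 (the suite's exact mode isn't visible in the log):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // per-item file mode; overrides the volume's defaultMode

	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						DownwardAPI: &corev1.DownwardAPIProjection{
							Items: []corev1.DownwardAPIVolumeFile{{
								Path:     "podname",
								FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
								Mode:     &mode,
							}},
						},
					}},
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
------------------------------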
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":178,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:58.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 6 22:09:58.594: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:09:58.606: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:10:00.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471798, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471798, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471798, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471798, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:10:03.628: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:03.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5592" for this suite. STEP: Destroying namespace "webhook-5592-markers" for this suite. 
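------------------------------
The listing spec above creates its MutatingWebhookConfigurations under a shared label, lists them, then removes them with a single DeleteCollection, which is why the second ConfigMap in the log comes back unmutated: nothing intercepts it anymore. The two calls, sketched with an illustrative label:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=demo"}

	list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().List(ctx, sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d matching webhook configurations\n", len(list.Items))

	// One call removes everything matching the selector.
	if err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
------------------------------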
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.413 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:49.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:09:49.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:09:51.485: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471789, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471789, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471789, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:09:54.494: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:06.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2601" for this suite. STEP: Destroying namespace "webhook-2601-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.424 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":23,"skipped":330,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:27.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics May 6 22:10:07.376: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 6 22:10:07.522: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: May 6 22:10:07.522: INFO: Deleting pod "simpletest.rc-5w7gd" in namespace "gc-1494" May 6 22:10:07.530: INFO: Deleting pod "simpletest.rc-7xrb7" in namespace "gc-1494" May 6 22:10:07.536: INFO: Deleting pod "simpletest.rc-dbwrn" in namespace "gc-1494" May 6 22:10:07.543: INFO: Deleting pod "simpletest.rc-h7grk" in namespace "gc-1494" May 6 22:10:07.549: INFO: Deleting pod "simpletest.rc-mbtrc" in namespace "gc-1494" May 6 22:10:07.555: INFO: Deleting pod "simpletest.rc-pjw2j" in namespace "gc-1494" May 6 22:10:07.561: INFO: Deleting pod "simpletest.rc-qd5qb" in namespace "gc-1494" May 6 22:10:07.568: INFO: Deleting pod "simpletest.rc-qmjpq" in namespace "gc-1494" 
May 6 22:10:07.574: INFO: Deleting pod "simpletest.rc-x29cs" in namespace "gc-1494" May 6 22:10:07.582: INFO: Deleting pod "simpletest.rc-x5tvw" in namespace "gc-1494" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:07.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1494" for this suite. • [SLOW TEST:40.302 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":10,"skipped":175,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:07.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Service STEP: watching for the Service to be added May 6 22:10:07.645: INFO: Found Service test-service-vccfw in namespace services-3036 with labels: map[test-service-static:true] & ports [{http TCP 80 {0 80 } 0}] May 6 22:10:07.645: INFO: Service test-service-vccfw created STEP: Getting /status May 6 22:10:07.649: INFO: Service test-service-vccfw has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched May 6 22:10:07.654: INFO: observed Service test-service-vccfw in namespace services-3036 with annotations: map[] & LoadBalancer: {[]} May 6 22:10:07.654: INFO: Found Service test-service-vccfw in namespace services-3036 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} May 6 22:10:07.654: INFO: Service test-service-vccfw has service status patched STEP: updating the ServiceStatus May 6 22:10:07.660: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated May 6 22:10:07.661: INFO: Observed Service test-service-vccfw in namespace services-3036 with annotations: map[] & Conditions: {[]} May 6 22:10:07.661: INFO: Observed event: &Service{ObjectMeta:{test-service-vccfw services-3036 01d7eb14-bb82-41df-89fe-454368c8d348 37134 0 2022-05-06 22:10:07 +0000 UTC map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-05-06 22:10:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}},"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.233.58.174,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.233.58.174],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} May 6 22:10:07.662: INFO: Found Service test-service-vccfw in namespace services-3036 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] May 6 22:10:07.662: INFO: Service test-service-vccfw has service status updated STEP: patching the service STEP: watching for the Service to be patched May 6 22:10:07.672: INFO: observed Service test-service-vccfw in namespace services-3036 with labels: map[test-service-static:true] May 6 22:10:07.672: INFO: observed Service test-service-vccfw in namespace services-3036 with labels: map[test-service-static:true] May 6 22:10:07.672: INFO: observed Service test-service-vccfw in namespace services-3036 with labels: map[test-service-static:true] May 6 22:10:07.673: INFO: Found Service test-service-vccfw in namespace services-3036 with labels: map[test-service:patched test-service-static:true] May 6 22:10:07.673: INFO: Service test-service-vccfw patched STEP: deleting the service STEP: watching for the Service to be deleted May 6 22:10:07.682: INFO: Observed event: ADDED May 6 22:10:07.682: INFO: Observed event: MODIFIED May 6 22:10:07.682: INFO: Observed event: MODIFIED May 6 22:10:07.682: INFO: Observed event: MODIFIED May 6 22:10:07.682: INFO: Found Service test-service-vccfw in namespace services-3036 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] May 6 22:10:07.682: INFO: Service test-service-vccfw deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:07.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3036" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":11,"skipped":179,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:07.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingress STEP: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 6 22:10:07.770: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 6 22:10:07.773: INFO: starting watch STEP: patching STEP: updating May 6 22:10:07.782: INFO: waiting for watch events with expected annotations May 6 22:10:07.782: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:07.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingress-1703" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":12,"skipped":186,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":11,"skipped":190,"failed":0} [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:03.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-8899/configmap-test-9caa849b-baba-40ba-bca2-280131e9f42a STEP: Creating a pod to test consume configMaps May 6 22:10:03.799: INFO: Waiting up to 5m0s for pod "pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157" in namespace "configmap-8899" to be "Succeeded or Failed" May 6 22:10:03.802: INFO: Pod "pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157": Phase="Pending", Reason="", readiness=false. Elapsed: 3.240746ms May 6 22:10:05.806: INFO: Pod "pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007191057s May 6 22:10:07.809: INFO: Pod "pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010660738s STEP: Saw pod success May 6 22:10:07.809: INFO: Pod "pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157" satisfied condition "Succeeded or Failed" May 6 22:10:07.812: INFO: Trying to get logs from node node2 pod pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157 container env-test: STEP: delete the pod May 6 22:10:09.095: INFO: Waiting for pod pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157 to disappear May 6 22:10:09.097: INFO: Pod pod-configmaps-16f73833-dc66-4670-bb2f-a1eb472bb157 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:09.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8899" for this suite. • [SLOW TEST:5.346 seconds] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":190,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:00.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab in namespace container-probe-3174 May 6 22:08:10.123: INFO: Started pod liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab in namespace container-probe-3174 STEP: checking the pod's current state and verifying that restartCount is present May 6 22:08:10.126: INFO: Initial restart count of pod liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is 0 May 6 22:08:22.156: INFO: Restart count of pod container-probe-3174/liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is now 1 (12.030443943s elapsed) May 6 22:08:44.207: INFO: Restart count of pod container-probe-3174/liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is now 2 (34.081095899s elapsed) May 6 22:09:06.258: INFO: Restart count of pod container-probe-3174/liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is now 3 (56.132589371s elapsed) May 6 22:09:52.352: INFO: Restart count of pod container-probe-3174/liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is now 4 (1m42.225994054s elapsed) May 6 22:10:16.401: INFO: Restart count of pod container-probe-3174/liveness-77c0745e-3c7b-4c45-a7c8-4918350b04ab is now 5 (2m6.275365911s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:16.409: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready STEP: Destroying namespace "container-probe-3174" for this suite. • [SLOW TEST:136.332 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":118,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:06.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:10:06.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899" in namespace "downward-api-2121" to be "Succeeded or Failed" May 6 22:10:06.662: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.827959ms May 6 22:10:08.666: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006572615s May 6 22:10:10.669: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009952847s May 6 22:10:12.674: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014653388s May 6 22:10:14.677: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018047899s May 6 22:10:16.682: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022594439s STEP: Saw pod success May 6 22:10:16.682: INFO: Pod "downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899" satisfied condition "Succeeded or Failed" May 6 22:10:16.685: INFO: Trying to get logs from node node2 pod downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899 container client-container: STEP: delete the pod May 6 22:10:16.703: INFO: Waiting for pod downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899 to disappear May 6 22:10:16.705: INFO: Pod downwardapi-volume-4d8bfa43-a511-4e5d-a143-f6855d948899 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:16.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2121" for this suite. 
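------------------------------
The Downward API volume test above projects the container's own cpu request into a file and then reads it back from the pod's logs. A minimal client-go sketch of the kind of pod it builds (not the framework's code; the pod name, namespace, and busybox image are illustrative stand-ins):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox", // stand-in image
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_request",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "requests.cpu",
                                    // Divisor 1m: the file holds millicores, here "250".
                                    Divisor: resource.MustParse("1m"),
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created", pod.Name)
    }

The pod runs to "Succeeded" exactly as the log shows, and the projected value appears on its stdout.
------------------------------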
• [SLOW TEST:10.092 seconds]
[sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":334,"failed":0}
[BeforeEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:10:16.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:10:16.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-1852" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":25,"skipped":334,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:10:07.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
May 6 22:10:17.999: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true)
May 6 22:10:18.137: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
May 6 22:10:18.137: INFO: Deleting pod "simpletest-rc-to-be-deleted-58rpf" in namespace "gc-45"
May 6 22:10:18.145: INFO: Deleting pod "simpletest-rc-to-be-deleted-6nttx" in namespace "gc-45"
May 6 22:10:18.152: INFO: Deleting pod "simpletest-rc-to-be-deleted-85qz7" in namespace "gc-45"
May 6 22:10:18.159: INFO: Deleting pod "simpletest-rc-to-be-deleted-c664t" in namespace "gc-45"
May 6 22:10:18.166: INFO: Deleting pod "simpletest-rc-to-be-deleted-ct66x" in namespace "gc-45"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:10:18.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-45" for this suite.
• [SLOW TEST:10.273 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":13,"skipped":229,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:10:09.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 22:10:09.578: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 22:10:11.589: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:10:13.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1,
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:10:15.592: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:10:17.594: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471809, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:10:20.602: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy 
validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:21.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7406" for this suite. STEP: Destroying namespace "webhook-7406-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.529 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":13,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:16.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 22:10:16.490: INFO: Waiting up to 5m0s for pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe" in namespace "emptydir-7087" to be "Succeeded or Failed" May 6 22:10:16.494: INFO: Pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448257ms May 6 22:10:18.499: INFO: Pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008907194s May 6 22:10:20.503: INFO: Pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01332564s May 6 22:10:22.507: INFO: Pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.017750314s STEP: Saw pod success May 6 22:10:22.507: INFO: Pod "pod-5f0034d9-8679-4f81-add7-8093ca15cbfe" satisfied condition "Succeeded or Failed" May 6 22:10:22.510: INFO: Trying to get logs from node node2 pod pod-5f0034d9-8679-4f81-add7-8093ca15cbfe container test-container: STEP: delete the pod May 6 22:10:22.524: INFO: Waiting for pod pod-5f0034d9-8679-4f81-add7-8093ca15cbfe to disappear May 6 22:10:22.526: INFO: Pod pod-5f0034d9-8679-4f81-add7-8093ca15cbfe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7087" for this suite. • [SLOW TEST:6.081 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":133,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:16.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 6 22:10:16.919: INFO: The status of Pod labelsupdated10759bf-5f89-4a9a-a569-d55341eea63b is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:18.923: INFO: The status of Pod labelsupdated10759bf-5f89-4a9a-a569-d55341eea63b is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:20.924: INFO: The status of Pod labelsupdated10759bf-5f89-4a9a-a569-d55341eea63b is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:22.922: INFO: The status of Pod labelsupdated10759bf-5f89-4a9a-a569-d55341eea63b is Running (Ready = true) May 6 22:10:23.441: INFO: Successfully updated pod "labelsupdated10759bf-5f89-4a9a-a569-d55341eea63b" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:25.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1372" for this suite. 
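------------------------------
"should update labels on modification" hinges on the kubelet refreshing a projected downwardAPI file when pod metadata changes, with no container restart. The update the test performs is just a label patch; a minimal client-go sketch follows (the pod name and namespace here are hypothetical, not from this run):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        // Patch only the labels; the kubelet then rewrites the file that
        // the pod's projected downwardAPI volume builds from metadata.labels.
        patch := []byte(`{"metadata":{"labels":{"key":"value-updated"}}}`)
        pod, err := cs.CoreV1().Pods("default").Patch(context.TODO(),
            "labels-demo-pod", // hypothetical pod name
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        if err != nil { panic(err) }
        fmt.Println("labels now:", pod.Labels)
    }
------------------------------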
• [SLOW TEST:8.582 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":390,"failed":0} SS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:25.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Starting the proxy May 6 22:10:25.498: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5398 proxy --unix-socket=/tmp/kubectl-proxy-unix533027354/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:25.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5398" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":27,"skipped":392,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:25.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:25.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1754" for this suite. 
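------------------------------
The Lease test only needs the coordination.k8s.io/v1 API to round-trip an object through create, read, update, patch, list, and delete. A minimal client-go sketch of the create step (the lease name, holder identity, and namespace are illustrative):

    package main

    import (
        "context"
        "fmt"

        coordinationv1 "k8s.io/api/coordination/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        holder := "e2e-demo" // illustrative holder identity
        ttl := int32(30)
        lease := &coordinationv1.Lease{
            ObjectMeta: metav1.ObjectMeta{Name: "demo-lease"},
            Spec: coordinationv1.LeaseSpec{
                HolderIdentity:       &holder,
                LeaseDurationSeconds: &ttl,
            },
        }
        created, err := cs.CoordinationV1().Leases("default").Create(
            context.TODO(), lease, metav1.CreateOptions{})
        if err != nil { panic(err) }
        fmt.Println("created lease", created.Name)
    }
------------------------------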
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":28,"skipped":411,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:21.746: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 22:10:21.825: INFO: Waiting up to 5m0s for pod "pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8" in namespace "emptydir-1756" to be "Succeeded or Failed" May 6 22:10:21.828: INFO: Pod "pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.240873ms May 6 22:10:23.832: INFO: Pod "pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006672652s May 6 22:10:25.837: INFO: Pod "pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011238253s STEP: Saw pod success May 6 22:10:25.837: INFO: Pod "pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8" satisfied condition "Succeeded or Failed" May 6 22:10:25.839: INFO: Trying to get logs from node node2 pod pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8 container test-container: STEP: delete the pod May 6 22:10:25.853: INFO: Waiting for pod pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8 to disappear May 6 22:10:25.855: INFO: Pod pod-c6e79ffd-ff66-4b30-9855-1c73b05d85a8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:25.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1756" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":237,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:22.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on node default medium May 6 22:10:22.704: INFO: Waiting up to 5m0s for pod "pod-ae163bf1-c739-4123-bbf4-5c016089ac31" in namespace "emptydir-1385" to be "Succeeded or Failed" May 6 22:10:22.706: INFO: Pod "pod-ae163bf1-c739-4123-bbf4-5c016089ac31": Phase="Pending", Reason="", readiness=false. Elapsed: 1.963144ms May 6 22:10:24.709: INFO: Pod "pod-ae163bf1-c739-4123-bbf4-5c016089ac31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005562384s May 6 22:10:26.714: INFO: Pod "pod-ae163bf1-c739-4123-bbf4-5c016089ac31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010392954s STEP: Saw pod success May 6 22:10:26.714: INFO: Pod "pod-ae163bf1-c739-4123-bbf4-5c016089ac31" satisfied condition "Succeeded or Failed" May 6 22:10:26.717: INFO: Trying to get logs from node node1 pod pod-ae163bf1-c739-4123-bbf4-5c016089ac31 container test-container: STEP: delete the pod May 6 22:10:26.729: INFO: Waiting for pod pod-ae163bf1-c739-4123-bbf4-5c016089ac31 to disappear May 6 22:10:26.731: INFO: Pod pod-ae163bf1-c739-4123-bbf4-5c016089ac31 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:26.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1385" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":202,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:53.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:27.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6713" for this suite. 
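------------------------------
Each 'terminate-cmd-*' container above asserts on fields of the pod's status after the container exits under a different restart policy (rpa = Always, rpof = OnFailure, rpn = Never). A minimal client-go sketch of the reads behind those assertions (the pod name and namespace are hypothetical):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        pod, err := cs.CoreV1().Pods("default").Get(
            context.TODO(), "terminate-demo", metav1.GetOptions{})
        if err != nil { panic(err) }
        st := pod.Status.ContainerStatuses[0]
        fmt.Println("Phase:       ", pod.Status.Phase) // the expected 'Phase'
        fmt.Println("RestartCount:", st.RestartCount)  // the expected 'RestartCount'
        fmt.Println("Ready:       ", st.Ready)         // the expected 'Ready' condition
        fmt.Printf("State:        %+v\n", st.State)    // Waiting/Running/Terminated
    }
------------------------------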
• [SLOW TEST:34.266 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 when starting a container that exits /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42 should run with the expected status [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":143,"failed":0} SS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:25.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-66e0574e-d357-4743-8bce-65d5172c3c28 STEP: Creating a pod to test consume secrets May 6 22:10:25.797: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e" in namespace "projected-4282" to be "Succeeded or Failed" May 6 22:10:25.800: INFO: Pod "pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.807804ms May 6 22:10:27.804: INFO: Pod "pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006814068s May 6 22:10:29.807: INFO: Pod "pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009931411s STEP: Saw pod success May 6 22:10:29.807: INFO: Pod "pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e" satisfied condition "Succeeded or Failed" May 6 22:10:29.810: INFO: Trying to get logs from node node2 pod pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e container projected-secret-volume-test: STEP: delete the pod May 6 22:10:29.823: INFO: Waiting for pod pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e to disappear May 6 22:10:29.825: INFO: Pod pod-projected-secrets-20a395e5-f68e-4614-a986-f3e69211900e no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:29.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4282" for this suite. 
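------------------------------
The projected-secret test above mounts a Secret through a projected volume and has the container print the mounted key so the framework can compare stdout. A minimal client-go sketch of such a pod (not the test's code; names, namespace, and the busybox image are illustrative, and the referenced secret must already exist):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"}, // illustrative
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "projected-secret-volume-test",
                    Image:   "busybox", // stand-in image
                    Command: []string{"cat", "/etc/projected/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name: "projected-secret", MountPath: "/etc/projected",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "projected-secret",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                Secret: &corev1.SecretProjection{
                                    LocalObjectReference: corev1.LocalObjectReference{
                                        Name: "demo-secret", // must exist with key data-1
                                    },
                                },
                            }},
                        },
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("created", pod.Name)
    }
------------------------------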
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":442,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:25.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:10:25.908: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef" in namespace "downward-api-1918" to be "Succeeded or Failed" May 6 22:10:25.911: INFO: Pod "downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.030753ms May 6 22:10:27.914: INFO: Pod "downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006297826s May 6 22:10:29.918: INFO: Pod "downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010454096s STEP: Saw pod success May 6 22:10:29.918: INFO: Pod "downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef" satisfied condition "Succeeded or Failed" May 6 22:10:29.921: INFO: Trying to get logs from node node1 pod downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef container client-container: STEP: delete the pod May 6 22:10:29.935: INFO: Waiting for pod downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef to disappear May 6 22:10:29.937: INFO: Pod downwardapi-volume-cc30436a-4430-44d6-a663-e4c4355f2cef no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:29.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1918" for this suite. 
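------------------------------
Both downward API tests above finish the same way: "Trying to get logs from node ... container client-container:" followed by a comparison of the container's stdout against the projected value. A minimal client-go sketch of that log read (the pod name and namespace are hypothetical):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil { panic(err) }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil { panic(err) }
        // GetLogs returns a rest.Request; Do().Raw() fetches the bytes.
        raw, err := cs.CoreV1().Pods("default").
            GetLogs("downwardapi-demo", &corev1.PodLogOptions{ // hypothetical pod
                Container: "client-container",
            }).Do(context.TODO()).Raw()
        if err != nil { panic(err) }
        fmt.Print(string(raw))
    }
------------------------------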
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:29.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating cluster-info May 6 22:10:29.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1453 cluster-info' May 6 22:10:30.049: INFO: stderr: "" May 6 22:10:30.049: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.10.190.202:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:30.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1453" for this suite. •S ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":30,"skipped":449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:26.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:10:26.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4" in namespace "projected-4571" to be "Succeeded or Failed" May 6 22:10:26.784: INFO: Pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.547773ms May 6 22:10:28.787: INFO: Pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005853365s May 6 22:10:30.792: INFO: Pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.01068529s May 6 22:10:32.796: INFO: Pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014923421s STEP: Saw pod success May 6 22:10:32.797: INFO: Pod "downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4" satisfied condition "Succeeded or Failed" May 6 22:10:32.799: INFO: Trying to get logs from node node2 pod downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4 container client-container: STEP: delete the pod May 6 22:10:32.813: INFO: Waiting for pod downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4 to disappear May 6 22:10:32.815: INFO: Pod downwardapi-volume-e30e2522-d408-4f93-a5ec-38eb37eb8ab4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:32.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4571" for this suite. • [SLOW TEST:6.076 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":206,"failed":0} S ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:18.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4432 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4432;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4432 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4432;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4432.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4432.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4432.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4432.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4432.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4432.svc SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@_http._tcp.dns-test-service.dns-4432.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4432.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4432.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4432.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.36.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.36.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.36.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.36.227_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4432 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4432;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4432 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4432;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4432.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4432.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4432.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4432.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4432.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4432.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4432.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4432.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4432.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4432.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 227.36.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.36.227_udp@PTR;check="$$(dig +tcp +noall +answer +search 227.36.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.36.227_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:10:28.295: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.297: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.301: INFO: Unable to read wheezy_udp@dns-test-service.dns-4432 from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.305: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4432 from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.308: INFO: Unable to read wheezy_udp@dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.311: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.314: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.316: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.335: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.338: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.340: INFO: Unable to read jessie_udp@dns-test-service.dns-4432 from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-4432 from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.345: INFO: Unable to read jessie_udp@dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.349: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.352: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.354: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4432.svc from pod dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918: the server could not find the requested resource (get pods dns-test-515e1063-ff2c-4e83-9475-6573a4918918) May 6 22:10:28.369: INFO: Lookups using dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4432 wheezy_tcp@dns-test-service.dns-4432 wheezy_udp@dns-test-service.dns-4432.svc wheezy_tcp@dns-test-service.dns-4432.svc wheezy_udp@_http._tcp.dns-test-service.dns-4432.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4432.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4432 jessie_tcp@dns-test-service.dns-4432 jessie_udp@dns-test-service.dns-4432.svc jessie_tcp@dns-test-service.dns-4432.svc jessie_udp@_http._tcp.dns-test-service.dns-4432.svc jessie_tcp@_http._tcp.dns-test-service.dns-4432.svc] May 6 22:10:33.441: INFO: DNS probes using dns-4432/dns-test-515e1063-ff2c-4e83-9475-6573a4918918 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:33.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4432" for this suite. 
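------------------------------
The probe pod's wheezy and jessie containers loop over dig for every qualification level of the service name; the partial forms ("dns-test-service", "dns-test-service.dns-4432") only resolve because the pod's /etc/resolv.conf search path appends the cluster suffixes, which is why the early lookups fail until the records propagate. A minimal Go sketch of the same lookups (it must run inside a cluster pod for the partial names to work; dns-4432 is the ephemeral namespace from this run):

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        r := &net.Resolver{}
        for _, name := range []string{
            "dns-test-service",                            // relies on the search path
            "dns-test-service.dns-4432",                   // relies on the search path
            "dns-test-service.dns-4432.svc.cluster.local", // fully qualified
        } {
            addrs, err := r.LookupHost(context.TODO(), name)
            fmt.Println(name, addrs, err)
        }
    }
------------------------------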
• [SLOW TEST:15.248 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":248,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:33.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should have Endpoints and EndpointSlices pointing to API Server [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:33.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1181" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":15,"skipped":276,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:27.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override command May 6 22:10:27.944: INFO: Waiting up to 5m0s for pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba" in namespace "containers-4085" to be "Succeeded or Failed" May 6 22:10:27.946: INFO: Pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba": Phase="Pending", Reason="", readiness=false. Elapsed: 1.961053ms May 6 22:10:29.948: INFO: Pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004502566s May 6 22:10:31.955: INFO: Pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010848258s May 6 22:10:33.958: INFO: Pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014642889s STEP: Saw pod success May 6 22:10:33.958: INFO: Pod "client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba" satisfied condition "Succeeded or Failed" May 6 22:10:33.961: INFO: Trying to get logs from node node2 pod client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba container agnhost-container: STEP: delete the pod May 6 22:10:33.974: INFO: Waiting for pod client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba to disappear May 6 22:10:33.976: INFO: Pod client-containers-878a34dd-45af-42f1-b9f8-25bcfc5f7dba no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:33.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4085" for this suite. • [SLOW TEST:6.071 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":145,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:30.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-77baed76-f6d8-4aa2-b0c1-5996266d0053 STEP: Creating a pod to test consume secrets May 6 22:10:30.109: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf" in namespace "projected-9186" to be "Succeeded or Failed" May 6 22:10:30.111: INFO: Pod "pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010325ms May 6 22:10:32.114: INFO: Pod "pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004606835s May 6 22:10:34.118: INFO: Pod "pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008967941s STEP: Saw pod success May 6 22:10:34.118: INFO: Pod "pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf" satisfied condition "Succeeded or Failed" May 6 22:10:34.121: INFO: Trying to get logs from node node1 pod pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf container projected-secret-volume-test: STEP: delete the pod May 6 22:10:34.135: INFO: Waiting for pod pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf to disappear May 6 22:10:34.137: INFO: Pod pod-projected-secrets-a68fa82a-bb02-4c1a-8879-0f905dccf2bf no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:34.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9186" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":311,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:30.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-76d5903c-e769-42a0-9588-465967ef5ec3 STEP: Creating a pod to test consume configMaps May 6 22:10:30.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22" in namespace "projected-7098" to be "Succeeded or Failed" May 6 22:10:30.138: INFO: Pod "pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14189ms May 6 22:10:32.143: INFO: Pod "pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006467466s May 6 22:10:34.148: INFO: Pod "pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011318461s STEP: Saw pod success May 6 22:10:34.148: INFO: Pod "pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22" satisfied condition "Succeeded or Failed" May 6 22:10:34.150: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22 container agnhost-container: STEP: delete the pod May 6 22:10:34.169: INFO: Waiting for pod pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22 to disappear May 6 22:10:34.171: INFO: Pod pod-projected-configmaps-114fa81a-e8f7-4b95-9148-f5a474bd8b22 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:34.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7098" for this suite. 
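The defaultMode check above reduces to mounting a projected configMap and reading the file mode back from inside the container. A rough hand-run equivalent; all names, the busybox image, and the 0400 mode are illustrative rather than taken from this run:

# ConfigMap and pod names are illustrative.
kubectl create configmap demo-config --from-literal=key=value
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # -L dereferences the ..data symlink the projected volume creates.
    command: ["sh", "-c", "stat -Lc '%a' /etc/projected/key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: demo-config
EOF
kubectl logs projected-defaultmode-demo   # expect "400"
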
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":477,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:34.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 22:10:34.236: INFO: Waiting up to 5m0s for pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf" in namespace "emptydir-720" to be "Succeeded or Failed" May 6 22:10:34.238: INFO: Pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335738ms May 6 22:10:36.244: INFO: Pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008122516s May 6 22:10:38.247: INFO: Pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011400129s May 6 22:10:40.250: INFO: Pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014873262s STEP: Saw pod success May 6 22:10:40.251: INFO: Pod "pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf" satisfied condition "Succeeded or Failed" May 6 22:10:40.253: INFO: Trying to get logs from node node2 pod pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf container test-container: STEP: delete the pod May 6 22:10:40.265: INFO: Waiting for pod pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf to disappear May 6 22:10:40.267: INFO: Pod pod-debce2e0-51cc-4bb4-a03c-da68f617e4bf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:40.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-720" for this suite. • [SLOW TEST:6.106 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":323,"failed":0} SSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:32.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Creating a NodePort Service STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota STEP: Ensuring resource quota status captures service creation STEP: Deleting Services STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:43.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2662" for this suite. • [SLOW TEST:11.096 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":8,"skipped":207,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:43.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of events May 6 22:10:43.974: INFO: created test-event-1 May 6 22:10:43.977: INFO: created test-event-2 May 6 22:10:43.980: INFO: created test-event-3 STEP: get a list of Events with a label in the current namespace STEP: delete collection of events May 6 22:10:43.982: INFO: requesting DeleteCollection of events STEP: check that the list of events matches the requested quantity May 6 22:10:44.003: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:44.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-8549" for this suite. 
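The create/list/DeleteCollection sequence above has a direct kubectl analogue; the label selector below is illustrative, since the suite's actual label is not printed in this log:

kubectl get events -l testevent-set=true      # list the labelled set
kubectl delete events -l testevent-set=true   # DeleteCollection by label
kubectl get events -l testevent-set=true      # verify the collection is gone
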
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":9,"skipped":217,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:33.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:10:34.018: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 22:10:42.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 --namespace=crd-publish-openapi-8940 create -f -' May 6 22:10:42.627: INFO: stderr: "" May 6 22:10:42.627: INFO: stdout: "e2e-test-crd-publish-openapi-6030-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 22:10:42.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 --namespace=crd-publish-openapi-8940 delete e2e-test-crd-publish-openapi-6030-crds test-cr' May 6 22:10:42.798: INFO: stderr: "" May 6 22:10:42.798: INFO: stdout: "e2e-test-crd-publish-openapi-6030-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 6 22:10:42.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 --namespace=crd-publish-openapi-8940 apply -f -' May 6 22:10:43.185: INFO: stderr: "" May 6 22:10:43.185: INFO: stdout: "e2e-test-crd-publish-openapi-6030-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 22:10:43.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 --namespace=crd-publish-openapi-8940 delete e2e-test-crd-publish-openapi-6030-crds test-cr' May 6 22:10:43.361: INFO: stderr: "" May 6 22:10:43.361: INFO: stdout: "e2e-test-crd-publish-openapi-6030-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 22:10:43.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8940 explain e2e-test-crd-publish-openapi-6030-crds' May 6 22:10:43.742: INFO: stderr: "" May 6 22:10:43.742: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6030-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:47.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8940" for this suite. 
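"Preserving unknown fields at the schema root" corresponds to a CRD whose root schema sets x-kubernetes-preserve-unknown-fields, which is why kubectl's client-side validation accepted arbitrary properties above. A sketch of such a CRD (the group and names are illustrative, not the generated ones from this run):

kubectl apply -f - <<EOF
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true   # unknown fields kept at the root
EOF
kubectl explain widgets   # served from the published OpenAPI once the CRD is Established
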
• [SLOW TEST:13.405 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":10,"skipped":149,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:44.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:10:44.071: INFO: Creating simple deployment test-new-deployment May 6 22:10:44.079: INFO: deployment "test-new-deployment" doesn't have the required revision set May 6 22:10:46.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471844, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471844, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471844, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471844, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:10:48.114: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-3419 7c5c7e51-3537-4491-b5be-74cd35ef4345 38431 3 2022-05-06 22:10:44 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-05-06 22:10:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:10:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004819b28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-06 22:10:46 +0000 UTC,LastTransitionTime:2022-05-06 22:10:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2022-05-06 22:10:46 +0000 UTC,LastTransitionTime:2022-05-06 22:10:44 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 22:10:48.118: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-3419 4a23cd82-de0c-4080-8999-8c4551c2c886 38434 3 2022-05-06 22:10:44 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
7c5c7e51-3537-4491-b5be-74cd35ef4345 0xc0042ee447 0xc0042ee448}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:10:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7c5c7e51-3537-4491-b5be-74cd35ef4345\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0042ee4b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 22:10:48.121: INFO: Pod "test-new-deployment-847dcfb7fb-4d796" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-4d796 test-new-deployment-847dcfb7fb- deployment-3419 3ba949f1-289c-44f1-9ff3-882968a72550 38412 0 2022-05-06 22:10:44 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.49" ], "mac": "c2:18:3e:27:f6:a4", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.49" ], "mac": "c2:18:3e:27:f6:a4", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4a23cd82-de0c-4080-8999-8c4551c2c886 0xc004819edf 0xc004819ef0}] [] [{kube-controller-manager Update v1 2022-05-06 22:10:44 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a23cd82-de0c-4080-8999-8c4551c2c886\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:10:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:10:46 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.49\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-42w75,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42w75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:10:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:10:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:10:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:10:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.49,StartTime:2022-05-06 22:10:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:10:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:docker://172eb42955a293845593c99b3faec510face4c803c3168d38d04d8f88f5d97d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:10:48.122: INFO: Pod "test-new-deployment-847dcfb7fb-74sqr" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-74sqr test-new-deployment-847dcfb7fb- deployment-3419 a30efba0-5df1-4273-b1bd-6a296eee693b 38436 0 2022-05-06 22:10:48 +0000 UTC map[name:httpd pod-template-hash:847dcfb7fb] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 4a23cd82-de0c-4080-8999-8c4551c2c886 0xc0048ec0df 0xc0048ec0f0}] [] [{kube-controller-manager Update v1 2022-05-06 22:10:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4a23cd82-de0c-4080-8999-8c4551c2c886\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v7xtj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v7xtj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:
Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:10:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:48.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3419" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":10,"skipped":233,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:40.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1386 STEP: creating a pod May 6 22:10:40.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' May 6 22:10:40.465: INFO: stderr: "" May 6 22:10:40.465: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for log generator to start. May 6 22:10:40.465: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 6 22:10:40.465: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6639" to be "running and ready, or succeeded" May 6 22:10:40.468: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178327ms May 6 22:10:42.471: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005865956s May 6 22:10:44.476: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.010643908s May 6 22:10:44.476: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 6 22:10:44.476: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 6 22:10:44.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator' May 6 22:10:44.658: INFO: stderr: "" May 6 22:10:44.658: INFO: stdout: "I0506 22:10:43.332071 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/mvtj 482\nI0506 22:10:43.533077 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/fwjm 329\nI0506 22:10:43.732484 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/tkm 408\nI0506 22:10:43.932874 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/4kd 519\nI0506 22:10:44.132170 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/vbfw 540\nI0506 22:10:44.332511 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/6xs 392\nI0506 22:10:44.532907 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/sqj 431\n" STEP: limiting log lines May 6 22:10:44.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator --tail=1' May 6 22:10:44.840: INFO: stderr: "" May 6 22:10:44.840: INFO: stdout: "I0506 22:10:44.732157 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/86z 448\n" May 6 22:10:44.840: INFO: got output "I0506 22:10:44.732157 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/86z 448\n" STEP: limiting log bytes May 6 22:10:44.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator --limit-bytes=1' May 6 22:10:45.016: INFO: stderr: "" May 6 22:10:45.016: INFO: stdout: "I" May 6 22:10:45.016: INFO: got output "I" STEP: exposing timestamps May 6 22:10:45.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator --tail=1 --timestamps' May 6 22:10:45.270: INFO: stderr: "" May 6 22:10:45.270: INFO: stdout: "2022-05-06T22:10:45.263760995Z I0506 22:10:45.263601 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/v4f 372\n" May 6 22:10:45.270: INFO: got output "2022-05-06T22:10:45.263760995Z I0506 22:10:45.263601 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/v4f 372\n" STEP: restricting to a time range May 6 22:10:47.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator --since=1s' May 6 22:10:47.950: INFO: stderr: "" May 6 22:10:47.950: INFO: stdout: "I0506 22:10:47.132228 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/f9m9 535\nI0506 22:10:47.332498 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/m57t 297\nI0506 22:10:47.532955 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4vc9 308\nI0506 22:10:47.732189 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/2cp 327\nI0506 22:10:47.932489 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/bvh 509\n" May 6 22:10:47.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 logs logs-generator logs-generator --since=24h' May 6 22:10:48.121: INFO: stderr: "" May 6 22:10:48.122: INFO: stdout: "I0506 22:10:43.332071 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/mvtj 482\nI0506 22:10:43.533077 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/fwjm 329\nI0506 22:10:43.732484 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/tkm 408\nI0506 22:10:43.932874 1 logs_generator.go:76] 
3 PUT /api/v1/namespaces/default/pods/4kd 519\nI0506 22:10:44.132170 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/vbfw 540\nI0506 22:10:44.332511 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/6xs 392\nI0506 22:10:44.532907 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/sqj 431\nI0506 22:10:44.732157 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/86z 448\nI0506 22:10:44.932395 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/gv6 385\nI0506 22:10:45.263601 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/v4f 372\nI0506 22:10:45.469832 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/4rd 295\nI0506 22:10:45.546468 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/sc6z 268\nI0506 22:10:45.732844 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/k6s 386\nI0506 22:10:45.932160 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/2sjf 328\nI0506 22:10:46.132497 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/qlv 588\nI0506 22:10:46.332958 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/g4qv 243\nI0506 22:10:46.532182 1 logs_generator.go:76] 16 POST /api/v1/namespaces/kube-system/pods/58d 342\nI0506 22:10:46.732503 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/6c6z 385\nI0506 22:10:46.932928 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/gv6d 412\nI0506 22:10:47.132228 1 logs_generator.go:76] 19 GET /api/v1/namespaces/default/pods/f9m9 535\nI0506 22:10:47.332498 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/m57t 297\nI0506 22:10:47.532955 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/4vc9 308\nI0506 22:10:47.732189 1 logs_generator.go:76] 22 POST /api/v1/namespaces/ns/pods/2cp 327\nI0506 22:10:47.932489 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/bvh 509\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391 May 6 22:10:48.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6639 delete pod logs-generator' May 6 22:10:56.797: INFO: stderr: "" May 6 22:10:56.797: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:56.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6639" for this suite. 
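Each filtering step above is a stock kubectl logs flag and can be replayed against any running pod:

kubectl logs logs-generator                        # full log
kubectl logs logs-generator --tail=1               # last line only
kubectl logs logs-generator --limit-bytes=1        # cap output at one byte
kubectl logs logs-generator --tail=1 --timestamps  # prefix RFC3339 timestamps
kubectl logs logs-generator --since=1s             # only entries from the last second
kubectl logs logs-generator --since=24h            # effectively the whole log
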
• [SLOW TEST:16.527 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383 should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":18,"skipped":326,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:56.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pod templates May 6 22:10:56.865: INFO: created test-podtemplate-1 May 6 22:10:56.868: INFO: created test-podtemplate-2 May 6 22:10:56.872: INFO: created test-podtemplate-3 STEP: get a list of pod templates with a label in the current namespace STEP: delete collection of pod templates May 6 22:10:56.874: INFO: requesting DeleteCollection of pod templates STEP: check that the list of pod templates matches the requested quantity May 6 22:10:56.883: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:10:56.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-3790" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":19,"skipped":339,"failed":0} [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:56.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-d21ef893-c43e-4763-bfe8-f87b21dd8835 STEP: Creating a pod to test consume configMaps May 6 22:10:56.931: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff" in namespace "projected-4143" to be "Succeeded or Failed" May 6 22:10:56.933: INFO: Pod "pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263752ms May 6 22:10:58.937: INFO: Pod "pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00613322s May 6 22:11:00.941: INFO: Pod "pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010470079s STEP: Saw pod success May 6 22:11:00.941: INFO: Pod "pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff" satisfied condition "Succeeded or Failed" May 6 22:11:00.944: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff container projected-configmap-volume-test: STEP: delete the pod May 6 22:11:00.961: INFO: Waiting for pod pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff to disappear May 6 22:11:00.963: INFO: Pod pod-projected-configmaps-ae7d1ef3-9a92-43b0-98f8-e085b84216ff no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:00.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4143" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":339,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:47.412: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 6 22:10:47.436: INFO: >>> kubeConfig: /root/.kube/config May 6 22:10:56.091: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:16.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-210" for this suite. 
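The "show up in OpenAPI documentation" assertion can be spot-checked by pulling the aggregated spec from the apiserver; the group names below are illustrative, since this log does not print the generated ones:

kubectl get --raw /openapi/v2 | grep -c 'groupa.example.com'   # >0 once published
kubectl get --raw /openapi/v2 | grep -c 'groupb.example.com'
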
• [SLOW TEST:29.042 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":11,"skipped":153,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:01.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 6 22:11:01.029: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:30.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6812" for this suite. 
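After a version rename, the thing to inspect is spec.versions and which entries are served; the CRD name below is illustrative:

kubectl get crd widgets.example.com \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'
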
• [SLOW TEST:29.122 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":21,"skipped":357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:11:16.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 6 22:11:16.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 6 22:11:25.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1906 --namespace=crd-publish-openapi-1906 create -f -'
May 6 22:11:25.642: INFO: stderr: ""
May 6 22:11:25.642: INFO: stdout: "e2e-test-crd-publish-openapi-3149-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 6 22:11:25.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1906 --namespace=crd-publish-openapi-1906 delete e2e-test-crd-publish-openapi-3149-crds test-cr'
May 6 22:11:25.800: INFO: stderr: ""
May 6 22:11:25.800: INFO: stdout: "e2e-test-crd-publish-openapi-3149-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 6 22:11:25.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1906 --namespace=crd-publish-openapi-1906 apply -f -'
May 6 22:11:26.158: INFO: stderr: ""
May 6 22:11:26.158: INFO: stdout: "e2e-test-crd-publish-openapi-3149-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 6 22:11:26.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1906 --namespace=crd-publish-openapi-1906 delete e2e-test-crd-publish-openapi-3149-crds test-cr'
May 6 22:11:26.332: INFO: stderr: ""
May 6 22:11:26.332: INFO: stdout: "e2e-test-crd-publish-openapi-3149-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 6 22:11:26.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1906 explain e2e-test-crd-publish-openapi-3149-crds'
May 6 22:11:26.665: INFO: stderr: ""
May 6 22:11:26.665: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3149-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
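------------------------------
Why kubectl create and apply accepted a CR with arbitrary unknown properties above: a CRD published without a real validation schema is marked to preserve unknown fields, so nothing prunes or rejects the extras, and kubectl explain has only an empty DESCRIPTION to show. A hedged sketch of a v1 CRD of that shape, assuming apiextensions of the v0.21.x vintage (the names here are illustrative, not the generated e2e-test-* ones):

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	preserve := true
	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					// "Without validation schema": accept any object shape.
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type:                   "object",
						XPreserveUnknownFields: &preserve,
					},
				},
			}},
		},
	}

	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := apiextensionsclient.NewForConfigOrDie(config)
	if _, err := client.ApiextensionsV1().CustomResourceDefinitions().
		Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------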
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:11:30.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1906" for this suite.
• [SLOW TEST:13.899 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":12,"skipped":164,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:11:30.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:11:32.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-2169" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":13,"skipped":165,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:11:30.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-2c487257-de70-437a-bc23-5b0705cc37b6
STEP: Creating a pod to test consume secrets
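------------------------------
The pod this step creates consumes one secret through two separate volumes in the same pod, with a single test container reading the mounted files back. A hedged sketch of that pod spec using client-go types; the image, the key name data-1, and the mount paths are illustrative, not the exact e2e values.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// secretPod builds a pod that mounts the same secret at two paths and
// cats the mounted files, mirroring the "consume secrets" step above.
func secretPod(name, secretName string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{
				{Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				}},
				{Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: secretName},
				}},
			},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
		},
	}
}

func main() {
	pod := secretPod("pod-secrets-example", "secret-test-example")
	fmt.Printf("%s mounts %d volumes backed by the same secret\n", pod.Name, len(pod.Spec.Volumes))
}
------------------------------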
May 6 22:11:30.238: INFO: Waiting up to 5m0s for pod "pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3" in namespace "secrets-2451" to be "Succeeded or Failed"
May 6 22:11:30.240: INFO: Pod "pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260717ms
May 6 22:11:32.245: INFO: Pod "pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00661433s
May 6 22:11:34.249: INFO: Pod "pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010734949s
STEP: Saw pod success
May 6 22:11:34.249: INFO: Pod "pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3" satisfied condition "Succeeded or Failed"
May 6 22:11:34.252: INFO: Trying to get logs from node node2 pod pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3 container secret-volume-test: 
STEP: delete the pod
May 6 22:11:34.267: INFO: Waiting for pod pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3 to disappear
May 6 22:11:34.269: INFO: Pod pod-secrets-b7745356-0bd2-4882-8a2f-cd450c89f0c3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:11:34.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2451" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":401,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:09:15.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service nodeport-test with type=NodePort in namespace services-3718
STEP: creating replication controller nodeport-test in namespace services-3718
I0506 22:09:15.231590 30 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3718, replica count: 2
I0506 22:09:18.283315 30 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 22:09:21.284058 30 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0506 22:09:24.286515 30 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 6 22:09:24.286: INFO: Creating new exec pod
May 6 22:09:33.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
May 6 22:09:33.578: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
May 6 22:09:33.578: INFO: stdout: "nodeport-test-s2p6l"
May 6 22:09:33.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.58.138 80'
May 6 22:09:33.851: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.58.138 80\nConnection to 10.233.58.138 80 port [tcp/http] succeeded!\n"
May 6 22:09:33.851: INFO: stdout: "nodeport-test-s2p6l"
May 6 22:09:33.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277'
May 6 22:09:34.187: INFO: rc: 1
May 6 22:09:34.187: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31277
nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[log condensed: the identical probe-and-fail record repeats roughly once per second from May 6 22:09:35.188 through 22:10:56.440, every attempt ending in "nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused", rc: 1, and "Retrying..."; only the timestamps differ.]
May 6 22:10:57.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:10:57.488: INFO: rc: 1 May 6 22:10:57.488: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:58.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:10:58.493: INFO: rc: 1 May 6 22:10:58.493: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:59.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:10:59.439: INFO: rc: 1 May 6 22:10:59.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:00.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:00.518: INFO: rc: 1 May 6 22:11:00.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:01.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:01.720: INFO: rc: 1 May 6 22:11:01.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:02.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:02.406: INFO: rc: 1 May 6 22:11:02.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + + echonc hostName -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:03.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:03.415: INFO: rc: 1 May 6 22:11:03.415: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:04.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:04.432: INFO: rc: 1 May 6 22:11:04.432: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:05.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:05.440: INFO: rc: 1 May 6 22:11:05.440: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:06.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:06.437: INFO: rc: 1 May 6 22:11:06.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:07.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:07.429: INFO: rc: 1 May 6 22:11:07.429: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + + echonc hostName -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:08.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:09.030: INFO: rc: 1 May 6 22:11:09.030: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:09.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:09.425: INFO: rc: 1 May 6 22:11:09.425: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:10.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:10.412: INFO: rc: 1 May 6 22:11:10.412: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:11.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:11.427: INFO: rc: 1 May 6 22:11:11.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:12.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:12.436: INFO: rc: 1 May 6 22:11:12.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:13.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:13.405: INFO: rc: 1 May 6 22:11:13.405: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:14.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:14.436: INFO: rc: 1 May 6 22:11:14.436: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:15.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:15.431: INFO: rc: 1 May 6 22:11:15.431: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:16.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:16.444: INFO: rc: 1 May 6 22:11:16.444: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:17.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:17.406: INFO: rc: 1 May 6 22:11:17.406: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:18.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:18.414: INFO: rc: 1 May 6 22:11:18.414: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:19.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:19.449: INFO: rc: 1 May 6 22:11:19.449: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:20.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:20.438: INFO: rc: 1 May 6 22:11:20.438: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:21.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:21.452: INFO: rc: 1 May 6 22:11:21.452: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:22.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:22.416: INFO: rc: 1 May 6 22:11:22.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:23.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:23.426: INFO: rc: 1 May 6 22:11:23.426: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:24.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:24.441: INFO: rc: 1 May 6 22:11:24.441: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:25.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:25.445: INFO: rc: 1 May 6 22:11:25.445: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:26.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:26.428: INFO: rc: 1 May 6 22:11:26.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:27.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:27.427: INFO: rc: 1 May 6 22:11:27.427: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:28.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:28.416: INFO: rc: 1 May 6 22:11:28.416: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:29.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:29.439: INFO: rc: 1 May 6 22:11:29.439: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:30.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:30.455: INFO: rc: 1 May 6 22:11:30.455: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:31.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:31.707: INFO: rc: 1 May 6 22:11:31.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:32.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:32.719: INFO: rc: 1 May 6 22:11:32.719: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:33.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:33.434: INFO: rc: 1 May 6 22:11:33.434: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:34.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:34.562: INFO: rc: 1 May 6 22:11:34.562: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:34.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277' May 6 22:11:34.792: INFO: rc: 1 May 6 22:11:34.792: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 exec execpodrbzn2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31277: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31277 nc: connect to 10.10.190.207 port 31277 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
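For readers reconstructing the failure: the loop above is the test's service-reachability probe, an exec'd "echo hostName | nc" against the node IP and NodePort, retried about once per second until a 2m0s budget is exhausted. A minimal standalone re-creation (a sketch only, not the framework's own code; kubeconfig, namespace, exec pod, and endpoint are all taken from this run, and the real framework additionally checks the response body, not just the exit status):

    # Retry the same nc probe once per second until a 2-minute deadline.
    deadline=$(( $(date +%s) + 120 ))
    while :; do
      if kubectl --kubeconfig=/root/.kube/config --namespace=services-3718 \
          exec execpodrbzn2 -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 10.10.190.207 31277'; then
        echo "service reachable"
        break
      fi
      if [ "$(date +%s)" -ge "$deadline" ]; then
        echo "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31277 over TCP protocol" >&2
        exit 1
      fi
      sleep 1
    done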
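Note that every attempt failed fast with "Connection refused" rather than hanging into nc's 2-second timeout: something on 10.10.190.207 actively rejected port 31277, which points at kube-proxy never programming the NodePort (or the Service having no ready endpoints) rather than at a black-holed network path. A hedged triage sketch, assuming the Service carries the same nodeport-test name as its replication controller and the kubeadm-default k8s-app=kube-proxy label:

    # Confirm the Service's allocated nodePort and that it has ready endpoints.
    kubectl --kubeconfig=/root/.kube/config -n services-3718 get svc nodeport-test -o wide
    kubectl --kubeconfig=/root/.kube/config -n services-3718 get endpoints nodeport-test
    # Check the kube-proxy instance on the probed node for errors.
    kubectl --kubeconfig=/root/.kube/config -n kube-system get pods -l k8s-app=kube-proxy -o wide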
May 6 22:11:34.792: FAIL: Unexpected error:
    <*errors.errorString | 0xc00381ab10>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31277 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31277 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 +0x265
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001b02600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001b02600)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001b02600, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-3718".
STEP: Found 17 events.
May 6 22:11:34.809: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodrbzn2: { } Scheduled: Successfully assigned services-3718/execpodrbzn2 to node2
May 6 22:11:34.809: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-s2p6l: { } Scheduled: Successfully assigned services-3718/nodeport-test-s2p6l to node1
May 6 22:11:34.809: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for nodeport-test-vgs5t: { } Scheduled: Successfully assigned services-3718/nodeport-test-vgs5t to node1
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:15 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-s2p6l
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:15 +0000 UTC - event for nodeport-test: {replication-controller } SuccessfulCreate: Created pod: nodeport-test-vgs5t
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:20 +0000 UTC - event for nodeport-test-s2p6l: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:20 +0000 UTC - event for nodeport-test-s2p6l: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 378.994828ms
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:20 +0000 UTC - event for nodeport-test-vgs5t: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:21 +0000 UTC - event for nodeport-test-s2p6l: {kubelet node1} Started: Started container nodeport-test
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:21 +0000 UTC - event for nodeport-test-s2p6l: {kubelet node1} Created: Created container nodeport-test
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:21 +0000 UTC - event for nodeport-test-vgs5t: {kubelet node1} Started: Started container nodeport-test
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:21 +0000 UTC - event for nodeport-test-vgs5t: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 555.498278ms
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:21 +0000 UTC - event for nodeport-test-vgs5t: {kubelet node1} Created: Created container nodeport-test
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:26 +0000 UTC - event for execpodrbzn2: {kubelet node2} Started: Started container agnhost-container
May 6 22:11:34.809: INFO: At 2022-05-06 22:09:26 +0000 UTC - event for execpodrbzn2: {kubelet node2} Created: Created
container agnhost-container May 6 22:11:34.809: INFO: At 2022-05-06 22:09:26 +0000 UTC - event for execpodrbzn2: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32" May 6 22:11:34.809: INFO: At 2022-05-06 22:09:26 +0000 UTC - event for execpodrbzn2: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 281.393112ms May 6 22:11:34.812: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:11:34.812: INFO: execpodrbzn2 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:24 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:24 +0000 UTC }] May 6 22:11:34.812: INFO: nodeport-test-s2p6l node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:15 +0000 UTC }] May 6 22:11:34.812: INFO: nodeport-test-vgs5t node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:09:15 +0000 UTC }] May 6 22:11:34.812: INFO: May 6 22:11:34.816: INFO: Logging node info for node master1 May 6 22:11:34.819: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 38796 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:32 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:32 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:32 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:32 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:34.820: INFO: Logging kubelet events for node master1 May 6 22:11:34.822: INFO: Logging pods the kubelet 
thinks is on node master1 May 6 22:11:34.852: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-scheduler ready: true, restart count 0 May 6 22:11:34.852: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:34.852: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:34.852: INFO: Init container install-cni ready: true, restart count 0 May 6 22:11:34.852: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:34.852: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container coredns ready: true, restart count 1 May 6 22:11:34.852: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded) May 6 22:11:34.852: INFO: Container docker-registry ready: true, restart count 0 May 6 22:11:34.852: INFO: Container nginx ready: true, restart count 0 May 6 22:11:34.852: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:34.852: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-controller-manager ready: true, restart count 2 May 6 22:11:34.852: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:34.852: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:34.852: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:34.852: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:34.947: INFO: Latency metrics for node master1 May 6 22:11:34.947: INFO: Logging node info for node master2 May 6 22:11:34.949: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 38754 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:28 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:34.950: INFO: Logging kubelet events for node master2 May 6 22:11:34.952: INFO: Logging pods the kubelet thinks is on node master2 May 6 22:11:34.967: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:34.967: INFO: Init container install-cni ready: true, restart count 0 May 6 22:11:34.967: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:34.967: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:34.967: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:11:34.967: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:34.967: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container autoscaler ready: true, restart count 1 May 6 22:11:34.967: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:34.967: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:34.967: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-controller-manager ready: true, restart count 1 May 6 22:11:34.967: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:34.967: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:35.049: INFO: Latency metrics for node master2 May 6 22:11:35.049: INFO: Logging node info for node master3 May 6 22:11:35.052: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 38743 0 2022-05-06 20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:35.052: INFO: Logging kubelet events for node master3 May 6 22:11:35.054: INFO: Logging pods the kubelet thinks is on node master3 May 6 22:11:35.063: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container coredns ready: true, restart count 1 May 6 22:11:35.064: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container nfd-controller ready: true, restart count 0 May 6 22:11:35.064: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:35.064: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:35.064: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:35.064: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:35.064: INFO: Init container install-cni ready: true, restart count 2 May 6 22:11:35.064: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:35.064: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:35.064: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:35.064: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-controller-manager ready: true, restart count 3 May 6 22:11:35.064: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.064: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:11:35.147: INFO: Latency metrics for node master3 May 6 22:11:35.147: INFO: Logging node info for node node1 May 6 22:11:35.150: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 38744 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true 
feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:26 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:35.152: INFO: Logging kubelet events for node node1 May 6 22:11:35.154: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:11:35.169: INFO: pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 started at 2022-05-06 22:10:34 +0000 UTC (0+3 container statuses recorded) May 6 22:11:35.169: INFO: Container createcm-volume-test ready: true, restart count 0 May 6 22:11:35.169: INFO: Container delcm-volume-test ready: true, restart count 0 May 6 22:11:35.169: INFO: Container updcm-volume-test ready: true, restart count 0 May 6 
22:11:35.169: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:35.169: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.169: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:35.169: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:35.169: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:11:35.169: INFO: Container collectd ready: true, restart count 0 May 6 22:11:35.169: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:11:35.169: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:11:35.169: INFO: nodeport-test-vgs5t started at 2022-05-06 22:09:15 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container nodeport-test ready: true, restart count 0 May 6 22:11:35.169: INFO: affinity-nodeport-timeout-mhfx4 started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 6 22:11:35.169: INFO: affinity-nodeport-timeout-m2l4n started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 6 22:11:35.169: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.169: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:35.169: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:11:35.169: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded) May 6 22:11:35.169: INFO: Container config-reloader ready: true, restart count 0 May 6 22:11:35.169: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:11:35.169: INFO: Container grafana ready: true, restart count 0 May 6 22:11:35.169: INFO: Container prometheus ready: true, restart count 1 May 6 22:11:35.169: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:11:35.169: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 22:11:35.169: INFO: Container discover ready: false, restart count 0 May 6 22:11:35.169: INFO: Container init ready: false, restart count 0 May 6 22:11:35.169: INFO: Container install ready: false, restart count 0 May 6 22:11:35.169: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.169: INFO: Container nodereport ready: true, restart count 0 May 6 22:11:35.169: INFO: Container reconcile ready: true, restart count 0 May 6 22:11:35.169: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:11:35.169: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:35.169: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container 
statuses recorded) May 6 22:11:35.169: INFO: Init container install-cni ready: true, restart count 2 May 6 22:11:35.169: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:11:35.169: INFO: nodeport-test-s2p6l started at 2022-05-06 22:09:15 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.169: INFO: Container nodeport-test ready: true, restart count 0 May 6 22:11:35.170: INFO: pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04 started at 2022-05-06 22:11:32 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.170: INFO: Container test-container ready: false, restart count 0 May 6 22:11:35.170: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.170: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:11:35.170: INFO: test-pod started at 2022-05-06 22:06:49 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.170: INFO: Container webserver ready: true, restart count 0 May 6 22:11:35.350: INFO: Latency metrics for node node1 May 6 22:11:35.350: INFO: Logging node info for node node2 May 6 22:11:35.354: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 38813 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 
kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:33 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:33 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:33 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:33 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:35.355: INFO: Logging kubelet events for node node2 May 6 22:11:35.356: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:11:35.431: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:11:35.431: INFO: Container collectd ready: true, restart count 0 May 6 22:11:35.431: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:11:35.431: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:11:35.431: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 22:11:35.431: INFO: Container discover ready: false, restart count 0 May 6 22:11:35.431: INFO: Container init ready: false, restart count 0 May 6 22:11:35.431: INFO: Container install ready: false, restart count 0 May 6 22:11:35.431: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.431: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:35.431: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:35.431: INFO: concurrent-27531251-5pr2p started at 2022-05-06 22:11:00 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container c ready: true, restart count 0 May 6 22:11:35.431: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 
container statuses recorded) May 6 22:11:35.431: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:11:35.431: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:35.431: INFO: Init container install-cni ready: true, restart count 1 May 6 22:11:35.431: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:11:35.431: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:11:35.431: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:11:35.431: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:11:35.431: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:11:35.431: INFO: execpodrbzn2 started at 2022-05-06 22:09:24 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:11:35.431: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:11:35.431: INFO: affinity-nodeport-timeout-cdlpk started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container affinity-nodeport-timeout ready: true, restart count 0 May 6 22:11:35.431: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:35.431: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:11:35.431: INFO: Container nodereport ready: true, restart count 0 May 6 22:11:35.431: INFO: Container reconcile ready: true, restart count 0 May 6 22:11:35.431: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container tas-extender ready: true, restart count 0 May 6 22:11:35.431: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:35.431: INFO: liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:11:35.431: INFO: execpod-affinity4x5qq started at 2022-05-06 22:09:40 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:11:35.431: INFO: pod-adoption-release started at 2022-05-06 22:11:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container pod-adoption-release ready: false, restart count 0 May 6 22:11:35.431: INFO: var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d started at 2022-05-06 
22:10:33 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container dapi-container ready: false, restart count 0 May 6 22:11:35.431: INFO: test-webserver-349a0b41-df7a-4cb8-8f6a-67d535217175 started at 2022-05-06 22:08:04 +0000 UTC (0+1 container statuses recorded) May 6 22:11:35.431: INFO: Container test-webserver ready: true, restart count 0 May 6 22:11:35.756: INFO: Latency metrics for node node2 May 6 22:11:35.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3718" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [140.571 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to create a functioning NodePort service [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:11:34.792: Unexpected error: <*errors.errorString | 0xc00381ab10>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31277 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31277 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":14,"skipped":307,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:35.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a test event STEP: listing events in all namespaces STEP: listing events in test namespace STEP: listing events with field selection filtering on source STEP: listing events with field selection filtering on reportingController STEP: getting the test event STEP: patching the test event STEP: getting the test event STEP: updating the test event STEP: getting the test event STEP: deleting the test event STEP: listing events in all namespaces STEP: listing events in test namespace [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:35.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7020" for this suite. 
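------------------------------
The one failure in this stretch is the NodePort conformance test above: a plain TCP connection to 10.10.190.207:31277 never succeeded inside the 2m0s window. Below is a minimal standalone probe of the same kind of check, sketched in Go; the endpoint and timings are taken from the failure message, and this is a hand-rolled illustration, not the e2e framework's own reachability helper.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Endpoint and overall timeout taken from the failure message above.
	endpoint := "10.10.190.207:31277"
	deadline := time.Now().Add(2 * time.Minute)

	// Retry a bare TCP dial until it succeeds or the deadline passes,
	// mirroring what "service is not reachable within 2m0s timeout ...
	// over TCP protocol" is reporting.
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", endpoint, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service is reachable on", endpoint)
			return
		}
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(2 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "service is not reachable within 2m0s on", endpoint)
	os.Exit(1)
}
```

Running a probe like this from a host that can route to the node helps split "the NodePort was never programmed on the node" from "the test runner lost connectivity", which is the usual first question when this test fails while the node dumps above still show Ready=True.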
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":15,"skipped":326,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:35.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: validating api versions May 6 22:11:35.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6753 api-versions' May 6 22:11:36.057: INFO: stderr: "" May 6 22:11:36.057: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncustom.metrics.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nintel.com/v1\nk8s.cni.cncf.io/v1\nmonitoring.coreos.com/v1\nmonitoring.coreos.com/v1alpha1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\ntelemetry.intel.com/v1alpha1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:36.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6753" for this suite. 
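The api-versions check above can be reproduced from any shell with cluster access; grep -x matches the whole line, so the pipeline below succeeds only if the core v1 group/version is served:
$ kubectl --kubeconfig=/root/.kube/config api-versions            # one group/version per line
$ kubectl api-versions | grep -x v1 && echo 'core v1 is available'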
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":16,"skipped":336,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:32.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 22:11:32.506: INFO: Waiting up to 5m0s for pod "pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04" in namespace "emptydir-413" to be "Succeeded or Failed" May 6 22:11:32.508: INFO: Pod "pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04": Phase="Pending", Reason="", readiness=false. Elapsed: 1.859452ms May 6 22:11:34.512: INFO: Pod "pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005826971s May 6 22:11:36.518: INFO: Pod "pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011157777s STEP: Saw pod success May 6 22:11:36.518: INFO: Pod "pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04" satisfied condition "Succeeded or Failed" May 6 22:11:36.521: INFO: Trying to get logs from node node1 pod pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04 container test-container: STEP: delete the pod May 6 22:11:36.534: INFO: Waiting for pod pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04 to disappear May 6 22:11:36.536: INFO: Pod pod-37cc3c01-eb87-4575-a4e4-e9748a88dc04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:36.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-413" for this suite. 
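The emptyDir test above checks that a tmpfs-backed volume (medium: Memory) is mounted with the expected 0777 mode and is writable by a non-root user. A minimal sketch of an equivalent pod (the name, uid, and busybox image are illustrative, not the e2e suite's actual test image):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                # non-root, as in the (non-root,0777,tmpfs) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /mnt/volume && touch /mnt/volume/ok"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory               # tmpfs-backed emptyDir
EOF
$ kubectl logs emptydir-tmpfs-demo # expect drwxrwxrwx (0777) on the mount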
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":181,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:34.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a Pod with a 'name' label pod-adoption-release is created May 6 22:11:34.324: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 6 22:11:36.328: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true) May 6 22:11:38.327: INFO: The status of Pod pod-adoption-release is Running (Ready = true) STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 22:11:39.342: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:40.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8920" for this suite. • [SLOW TEST:6.079 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":23,"skipped":406,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:40.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:11:40.411: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-0de93a36-6974-40a6-9761-4407ef35460d" in namespace "security-context-test-8713" to be "Succeeded or Failed" May 6 22:11:40.413: INFO: Pod "busybox-readonly-false-0de93a36-6974-40a6-9761-4407ef35460d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.177037ms May 6 22:11:42.417: INFO: Pod "busybox-readonly-false-0de93a36-6974-40a6-9761-4407ef35460d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006649351s May 6 22:11:44.422: INFO: Pod "busybox-readonly-false-0de93a36-6974-40a6-9761-4407ef35460d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011686074s May 6 22:11:44.423: INFO: Pod "busybox-readonly-false-0de93a36-6974-40a6-9761-4407ef35460d" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:44.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8713" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":409,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:36.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 6 22:11:36.581: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:44.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5864" for this suite. 
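Init containers on a RestartAlways pod, as exercised above, run sequentially to completion before any regular container starts. A minimal sketch (names are illustrative):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical name
spec:                              # restartPolicy defaults to Always
  initContainers:
  - name: init-1
    image: busybox
    command: ["sh", "-c", "echo first init container done"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "echo second init container done"]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # long-running main container
EOF
$ kubectl get pod init-demo        # STATUS moves through Init:0/2, Init:1/2, then Running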
• [SLOW TEST:8.343 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":15,"skipped":187,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:44.461: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating Pod STEP: Reading file content from the nginx-container May 6 22:11:50.503: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-7164 PodName:pod-sharedvolume-0f38f08a-3c30-43ff-ac01-61f88ed83464 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:11:50.503: INFO: >>> kubeConfig: /root/.kube/config May 6 22:11:50.779: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:50.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7164" for this suite. 
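The shared-volume test above mounts one emptyDir into two containers of the same pod and reads a file written by the other container, much as the ExecWithOptions cat call does. A minimal sketch (names are illustrative):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume-demo      # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo Hello from the writer > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /usr/share/volumeshare
  - name: busybox-main-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /usr/share/volumeshare
  volumes:
  - name: shared
    emptyDir: {}
EOF
$ kubectl exec pod-sharedvolume-demo -c busybox-main-container -- cat /usr/share/volumeshare/shareddata.txt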
• [SLOW TEST:6.329 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":25,"skipped":423,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:44.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-26567612-381d-4ac5-8574-8792567e1a00 STEP: Creating a pod to test consume secrets May 6 22:11:44.959: INFO: Waiting up to 5m0s for pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2" in namespace "secrets-9269" to be "Succeeded or Failed" May 6 22:11:44.962: INFO: Pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.851713ms May 6 22:11:46.966: INFO: Pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006366557s May 6 22:11:48.970: INFO: Pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010794385s May 6 22:11:50.975: INFO: Pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015651114s STEP: Saw pod success May 6 22:11:50.975: INFO: Pod "pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2" satisfied condition "Succeeded or Failed" May 6 22:11:50.978: INFO: Trying to get logs from node node1 pod pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2 container secret-volume-test: STEP: delete the pod May 6 22:11:50.990: INFO: Waiting for pod pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2 to disappear May 6 22:11:50.993: INFO: Pod pod-secrets-c1541220-8af2-42d3-ae47-6519746ca5b2 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:50.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9269" for this suite. 
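The secret-volume test above projects a Secret into a volume with a non-default file mode. A minimal sketch (the secret name, key, and the 0400 mode are illustrative):
$ kubectl create secret generic demo-secret --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "stat -Lc '%a' /etc/secret-volume/data-1 && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400            # project files as 0400 instead of the 0644 default
EOF
$ kubectl logs pod-secrets-demo    # expect 400, then value-1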
• [SLOW TEST:6.077 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:34.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-e039dcb8-d316-4457-965d-98ae5223a006 STEP: Creating configMap with name cm-test-opt-upd-8ed8409d-2268-4e5b-9459-8407f42e6980 STEP: Creating the pod May 6 22:10:34.242: INFO: The status of Pod pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:36.245: INFO: The status of Pod pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:38.246: INFO: The status of Pod pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 is Pending, waiting for it to be Running (with Ready = true) May 6 22:10:40.246: INFO: The status of Pod pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-e039dcb8-d316-4457-965d-98ae5223a006 STEP: Updating configmap cm-test-opt-upd-8ed8409d-2268-4e5b-9459-8407f42e6980 STEP: Creating configMap with name cm-test-opt-create-ab63fa5b-42d7-4ed5-a7de-b78308340806 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:52.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5234" for this suite. 
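The ConfigMap test above exercises optional ConfigMap volume sources: the pod starts even when a referenced ConfigMap is missing, and the kubelet later reflects creations, updates, and deletions into the mounted volume. A minimal sketch (names are illustrative; the kubelet's sync period means updates appear after a short delay):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-config
      mountPath: /etc/maybe-config
  volumes:
  - name: maybe-config
    configMap:
      name: cm-created-later       # may not exist yet
      optional: true               # pod starts anyway; volume fills in once it appears
EOF
$ kubectl create configmap cm-created-later --from-literal=key=value
$ kubectl exec pod-configmaps-demo -- sh -c 'until [ -f /etc/maybe-config/key ]; do sleep 1; done; cat /etc/maybe-config/key'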
• [SLOW TEST:78.465 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":480,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:50.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 22:11:53.875: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:53.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6992" for this suite. 
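The termination-message test above sets TerminationMessagePolicy FallbackToLogsOnError and expects the message to come from the termination-log file when the container succeeds (the log tail is used only on error with an empty file). A minimal sketch (names are illustrative):
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
$ kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints OK once the pod has succeeded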
• ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":441,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:51.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:11:51.086: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"369ec7f5-50da-416f-b0e7-dbbe54717474", Controller:(*bool)(0xc005d87eda), BlockOwnerDeletion:(*bool)(0xc005d87edb)}} May 6 22:11:51.090: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"98fd7e89-8057-4688-91cc-6205cc641e3f", Controller:(*bool)(0xc005c30e7a), BlockOwnerDeletion:(*bool)(0xc005c30e7b)}} May 6 22:11:51.094: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c9a81dc9-d748-4021-9a62-32c7cc0b2b1f", Controller:(*bool)(0xc0057b46e2), BlockOwnerDeletion:(*bool)(0xc0057b46e3)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:56.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9941" for this suite. • [SLOW TEST:5.082 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":17,"skipped":217,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:52.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 22:11:52.704: INFO: Waiting up to 5m0s for pod "pod-e980eee3-c0be-46ed-9c73-585ff0a864d0" in namespace "emptydir-5705" to be "Succeeded or Failed" May 6 22:11:52.706: INFO: Pod "pod-e980eee3-c0be-46ed-9c73-585ff0a864d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 1.867552ms May 6 22:11:54.710: INFO: Pod "pod-e980eee3-c0be-46ed-9c73-585ff0a864d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005609186s May 6 22:11:56.715: INFO: Pod "pod-e980eee3-c0be-46ed-9c73-585ff0a864d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010497121s STEP: Saw pod success May 6 22:11:56.715: INFO: Pod "pod-e980eee3-c0be-46ed-9c73-585ff0a864d0" satisfied condition "Succeeded or Failed" May 6 22:11:56.718: INFO: Trying to get logs from node node1 pod pod-e980eee3-c0be-46ed-9c73-585ff0a864d0 container test-container: STEP: delete the pod May 6 22:11:56.731: INFO: Waiting for pod pod-e980eee3-c0be-46ed-9c73-585ff0a864d0 to disappear May 6 22:11:56.733: INFO: Pod pod-e980eee3-c0be-46ed-9c73-585ff0a864d0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:11:56.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5705" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":487,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:25.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6080 May 6 22:09:25.937: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:27.940: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:29.941: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:31.947: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) May 6 22:09:33.941: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) May 6 22:09:33.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' May 6 22:09:34.195: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" May 6 22:09:34.195: INFO: stdout: "iptables" May 6 22:09:34.195: INFO: proxyMode: iptables May 6 22:09:34.204: INFO: Waiting for pod kube-proxy-mode-detector to disappear May 6 22:09:34.206: INFO: Pod kube-proxy-mode-detector no longer exists STEP: creating service affinity-nodeport-timeout in namespace services-6080 STEP: creating replication controller affinity-nodeport-timeout in namespace services-6080 I0506 22:09:34.217315 25 runners.go:190] Created replication controller with name: 
affinity-nodeport-timeout, namespace: services-6080, replica count: 3 I0506 22:09:37.269121 25 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:09:40.269614 25 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:09:40.277: INFO: Creating new exec pod May 6 22:09:45.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' May 6 22:09:45.556: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" May 6 22:09:45.556: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:09:45.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.3.113 80' May 6 22:09:45.813: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.3.113 80\nConnection to 10.233.3.113 80 port [tcp/http] succeeded!\n" May 6 22:09:45.813: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:09:45.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:09:46.037: INFO: rc: 1 May 6 22:09:46.037: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:09:47.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:09:47.290: INFO: rc: 1 May 6 22:09:47.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:09:48.037 through May 6 22:10:37.332: INFO: (the same probe, '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883', was retried roughly once per second; every attempt returned rc: 1 with "nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused" followed by "Retrying..."; the near-identical retry blocks are collapsed here)
May 6 22:10:38.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:39.336: INFO: rc: 1 May 6 22:10:39.336: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:40.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:40.269: INFO: rc: 1 May 6 22:10:40.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:41.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:41.349: INFO: rc: 1 May 6 22:10:41.349: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:42.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:42.548: INFO: rc: 1 May 6 22:10:42.548: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:10:43.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:43.292: INFO: rc: 1 May 6 22:10:43.292: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:44.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:44.301: INFO: rc: 1 May 6 22:10:44.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:45.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:45.573: INFO: rc: 1 May 6 22:10:45.573: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:46.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:46.268: INFO: rc: 1 May 6 22:10:46.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:10:47.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:47.269: INFO: rc: 1 May 6 22:10:47.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:48.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:48.269: INFO: rc: 1 May 6 22:10:48.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:49.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:49.311: INFO: rc: 1 May 6 22:10:49.311: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:50.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:50.274: INFO: rc: 1 May 6 22:10:50.274: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:10:51.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:51.304: INFO: rc: 1 May 6 22:10:51.304: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:52.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:52.281: INFO: rc: 1 May 6 22:10:52.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:53.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:53.324: INFO: rc: 1 May 6 22:10:53.324: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:54.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:54.270: INFO: rc: 1 May 6 22:10:54.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:10:55.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:55.288: INFO: rc: 1 May 6 22:10:55.288: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + + echonc hostName -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:56.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:56.280: INFO: rc: 1 May 6 22:10:56.280: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:57.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:57.267: INFO: rc: 1 May 6 22:10:57.267: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:10:58.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:58.492: INFO: rc: 1 May 6 22:10:58.492: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:10:59.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:10:59.296: INFO: rc: 1 May 6 22:10:59.296: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:00.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:00.358: INFO: rc: 1 May 6 22:11:00.358: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:01.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:01.721: INFO: rc: 1 May 6 22:11:01.721: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:02.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:02.282: INFO: rc: 1 May 6 22:11:02.282: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:03.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:03.269: INFO: rc: 1 May 6 22:11:03.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:04.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:04.253: INFO: rc: 1 May 6 22:11:04.253: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:05.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:05.290: INFO: rc: 1 May 6 22:11:05.290: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:06.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:06.269: INFO: rc: 1 May 6 22:11:06.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:07.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:07.283: INFO: rc: 1 May 6 22:11:07.283: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:08.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:09.031: INFO: rc: 1 May 6 22:11:09.031: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:09.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:09.292: INFO: rc: 1 May 6 22:11:09.292: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:10.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:10.270: INFO: rc: 1 May 6 22:11:10.270: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:11.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:11.268: INFO: rc: 1 May 6 22:11:11.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:12.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:12.284: INFO: rc: 1 May 6 22:11:12.284: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:13.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:13.281: INFO: rc: 1 May 6 22:11:13.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:14.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:14.260: INFO: rc: 1 May 6 22:11:14.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:15.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:15.268: INFO: rc: 1 May 6 22:11:15.268: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:16.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:16.294: INFO: rc: 1 May 6 22:11:16.294: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:17.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:17.273: INFO: rc: 1 May 6 22:11:17.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:18.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:18.260: INFO: rc: 1 May 6 22:11:18.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:19.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:19.293: INFO: rc: 1 May 6 22:11:19.293: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:20.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:20.273: INFO: rc: 1 May 6 22:11:20.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:21.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:21.295: INFO: rc: 1 May 6 22:11:21.295: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:22.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:22.274: INFO: rc: 1 May 6 22:11:22.274: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:23.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:23.276: INFO: rc: 1 May 6 22:11:23.276: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:24.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:24.267: INFO: rc: 1 May 6 22:11:24.267: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:25.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:25.269: INFO: rc: 1 May 6 22:11:25.269: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:26.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:26.265: INFO: rc: 1 May 6 22:11:26.265: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:27.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:27.267: INFO: rc: 1 May 6 22:11:27.267: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:28.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:28.281: INFO: rc: 1 May 6 22:11:28.281: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:29.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:29.428: INFO: rc: 1 May 6 22:11:29.428: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:30.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:30.273: INFO: rc: 1 May 6 22:11:30.273: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:31.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:31.418: INFO: rc: 1 May 6 22:11:31.418: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:32.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:32.345: INFO: rc: 1 May 6 22:11:32.345: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:33.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:33.283: INFO: rc: 1 May 6 22:11:33.283: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 31883 + echo hostName nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:34.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:34.261: INFO: rc: 1 May 6 22:11:34.261: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:35.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:35.426: INFO: rc: 1 May 6 22:11:35.426: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:36.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:36.301: INFO: rc: 1 May 6 22:11:36.301: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:37.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:37.276: INFO: rc: 1 May 6 22:11:37.276: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:38.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:38.285: INFO: rc: 1 May 6 22:11:38.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:39.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:39.294: INFO: rc: 1 May 6 22:11:39.294: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:40.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:40.335: INFO: rc: 1 May 6 22:11:40.335: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:41.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:41.373: INFO: rc: 1 May 6 22:11:41.373: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:42.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:42.344: INFO: rc: 1 May 6 22:11:42.344: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:11:43.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:43.285: INFO: rc: 1 May 6 22:11:43.285: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:44.039: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:44.297: INFO: rc: 1 May 6 22:11:44.297: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:45.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:45.276: INFO: rc: 1 May 6 22:11:45.276: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:11:46.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883' May 6 22:11:46.300: INFO: rc: 1 May 6 22:11:46.300: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 31883 nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
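The pattern above is a simple reachability probe: one TCP connection attempt against the NodePort endpoint, retried about once per second until a 2m0s budget expires. A minimal Go sketch of that retry loop follows; this is hypothetical illustration code, not the e2e framework's actual helper, with the endpoint and budget copied from this log:

    // probe.go: retry a TCP dial once per second until a time budget expires.
    // Mirrors the log's behavior; not the k8s e2e framework's real helper.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func probeTCP(addr string, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        for time.Now().Before(deadline) {
            // 2-second per-attempt timeout, matching `nc -w 2` above.
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil // endpoint reachable
            }
            fmt.Printf("probe failed (%v), retrying...\n", err)
            time.Sleep(1 * time.Second)
        }
        return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", budget, addr)
    }

    func main() {
        if err := probeTCP("10.10.190.207:31883", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }

When every attempt is refused for the full budget, the loop exits with the same error string that appears in the FAIL line below.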
May 6 22:11:46.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883'
May 6 22:11:46.538: INFO: rc: 1
May 6 22:11:46.538: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6080 exec execpod-affinity4x5qq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 31883:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 31883
nc: connect to 10.10.190.207 port 31883 (tcp) failed: Connection refused
command terminated with exit code 1

error: exit status 1
Retrying...
May 6 22:11:46.539: FAIL: Unexpected error:
    <*errors.errorString | 0xc000ffbb30>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31883 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31883 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc0011bf600, 0x77b33d8, 0xc003e35b80, 0xc001178a00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 +0x751
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1846 +0x9c
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0014b7500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0014b7500)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0014b7500, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 6 22:11:46.540: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-6080, will wait for the garbage collector to delete the pods
May 6 22:11:46.617: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 4.460244ms
May 6 22:11:46.718: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.15297ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6080".
STEP: Found 33 events.
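For re-running the exact probe from this log against the same cluster, a small harness can shell out to kubectl. The flags, namespace, pod name, and endpoint below are copied verbatim from the output above; the wrapper itself is an assumption for illustration, not part of the test suite:

    // rerun_probe.go: re-issue the failing exec probe once and print its output.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/usr/local/bin/kubectl",
            "--kubeconfig=/root/.kube/config",
            "--namespace=services-6080",
            "exec", "execpod-affinity4x5qq", "--",
            "/bin/sh", "-x", "-c",
            "echo hostName | nc -v -t -w 2 10.10.190.207 31883")
        out, err := cmd.CombinedOutput()
        fmt.Printf("output:\n%s\n", out)
        if err != nil {
            // nc exits non-zero on "Connection refused"; kubectl reports
            // this as "command terminated with exit code 1".
            fmt.Println("probe failed:", err)
        }
    }

The event dump that follows shows the backend pods were scheduled and started normally, which is why the sustained "Connection refused" on the node IP points at the NodePort path rather than the pods themselves.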
May 6 22:11:56.833: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: { } Scheduled: Successfully assigned services-6080/affinity-nodeport-timeout-cdlpk to node2
May 6 22:11:56.833: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: { } Scheduled: Successfully assigned services-6080/affinity-nodeport-timeout-m2l4n to node1
May 6 22:11:56.833: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: { } Scheduled: Successfully assigned services-6080/affinity-nodeport-timeout-mhfx4 to node1
May 6 22:11:56.833: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinity4x5qq: { } Scheduled: Successfully assigned services-6080/execpod-affinity4x5qq to node2
May 6 22:11:56.833: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for kube-proxy-mode-detector: { } Scheduled: Successfully assigned services-6080/kube-proxy-mode-detector to node2
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:26 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:27 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Created: Created container agnhost-container
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:27 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 319.628449ms
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:28 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Started: Started container agnhost-container
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:34 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-cdlpk
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:34 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-m2l4n
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:34 +0000 UTC - event for affinity-nodeport-timeout: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-timeout-mhfx4
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:34 +0000 UTC - event for kube-proxy-mode-detector: {kubelet node2} Killing: Stopping container agnhost-container
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:36 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:36 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 290.680509ms
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:36 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: {kubelet node2} Created: Created container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:36 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 404.647409ms
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:36 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: {kubelet node2} Started: Started container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 278.56755ms
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: {kubelet node1} Created: Created container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:37 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: {kubelet node1} Started: Started container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:41 +0000 UTC - event for execpod-affinity4x5qq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:42 +0000 UTC - event for execpod-affinity4x5qq: {kubelet node2} Started: Started container agnhost-container
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:42 +0000 UTC - event for execpod-affinity4x5qq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 303.214067ms
May 6 22:11:56.833: INFO: At 2022-05-06 22:09:42 +0000 UTC - event for execpod-affinity4x5qq: {kubelet node2} Created: Created container agnhost-container
May 6 22:11:56.833: INFO: At 2022-05-06 22:11:46 +0000 UTC - event for affinity-nodeport-timeout-cdlpk: {kubelet node2} Killing: Stopping container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:11:46 +0000 UTC - event for affinity-nodeport-timeout-m2l4n: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:11:46 +0000 UTC - event for affinity-nodeport-timeout-mhfx4: {kubelet node1} Killing: Stopping container affinity-nodeport-timeout
May 6 22:11:56.833: INFO: At 2022-05-06 22:11:46 +0000 UTC - event for execpod-affinity4x5qq: {kubelet node2} Killing: Stopping container agnhost-container
May 6 22:11:56.835: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 22:11:56.835: INFO: 
May 6 22:11:56.839: INFO: Logging node info for node master1
May 6 22:11:56.841: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 39200 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:52 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:52 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:52 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:52 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:56.842: INFO: Logging kubelet events for node master1 May 6 22:11:56.844: INFO: Logging pods the kubelet thinks is on node master1 May 6 22:11:56.865: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:56.865: INFO: Init container install-cni ready: true, restart count 0 May 6 22:11:56.865: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:56.865: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container coredns ready: true, restart count 1 May 6 22:11:56.865: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded) May 6 22:11:56.865: INFO: Container docker-registry ready: true, restart count 0 May 6 22:11:56.865: INFO: Container nginx ready: true, restart count 0 May 6 22:11:56.865: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-scheduler ready: true, restart count 0 May 6 22:11:56.865: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:56.865: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:56.865: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:56.865: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:56.865: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:56.865: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.865: INFO: Container kube-controller-manager ready: true, restart count 2 May 6 22:11:56.950: INFO: Latency metrics for node master1 May 6 22:11:56.950: INFO: Logging node info for node master2 May 6 22:11:56.953: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 39105 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux 
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:48 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:48 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:48 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:48 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:56.953: INFO: Logging kubelet events for node master2 May 6 22:11:56.955: INFO: Logging pods the kubelet thinks is on node master2 May 6 22:11:56.964: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:56.964: INFO: Init container install-cni ready: true, restart count 0 May 6 22:11:56.964: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:56.964: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:56.964: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:11:56.964: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:56.964: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container autoscaler ready: true, restart count 1 May 6 22:11:56.964: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:56.964: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:56.964: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-controller-manager ready: true, restart count 1 May 6 22:11:56.964: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:56.964: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:57.049: INFO: Latency metrics for node master2 May 6 22:11:57.049: INFO: Logging node info for node master3 May 6 22:11:57.052: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 39285 0 2022-05-06 20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw 
flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 
22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:57.052: INFO: Logging kubelet events for node master3 May 6 22:11:57.055: INFO: Logging pods the kubelet thinks is on node master3 May 6 22:11:57.064: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:57.064: INFO: Init container install-cni ready: true, restart count 2 May 6 22:11:57.064: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:11:57.064: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:57.064: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:57.064: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-controller-manager ready: true, restart count 3 May 6 22:11:57.064: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:11:57.064: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:57.064: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container nfd-controller ready: true, restart count 0 May 6 22:11:57.064: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:11:57.064: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:57.064: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.064: INFO: Container coredns ready: true, restart count 1 May 6 22:11:57.150: INFO: Latency metrics for node master3 May 6 22:11:57.150: INFO: Logging node info for node node1 May 6 22:11:57.152: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 39316 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true 
feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 
2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:56 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:57.153: INFO: Logging kubelet events for node node1 May 6 22:11:57.155: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:11:57.170: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.170: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:57.170: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:11:57.170: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses 
recorded) May 6 22:11:57.170: INFO: Container config-reloader ready: true, restart count 0 May 6 22:11:57.170: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:11:57.170: INFO: Container grafana ready: true, restart count 0 May 6 22:11:57.170: INFO: Container prometheus ready: true, restart count 1 May 6 22:11:57.170: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:11:57.170: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 22:11:57.170: INFO: Container discover ready: false, restart count 0 May 6 22:11:57.170: INFO: Container init ready: false, restart count 0 May 6 22:11:57.170: INFO: Container install ready: false, restart count 0 May 6 22:11:57.170: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.170: INFO: Container nodereport ready: true, restart count 0 May 6 22:11:57.170: INFO: Container reconcile ready: true, restart count 0 May 6 22:11:57.170: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:11:57.170: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:57.170: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:57.170: INFO: Init container install-cni ready: true, restart count 2 May 6 22:11:57.170: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:11:57.170: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:11:57.170: INFO: test-pod started at 2022-05-06 22:06:49 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container webserver ready: true, restart count 0 May 6 22:11:57.170: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.170: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:57.170: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.170: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:57.170: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:57.170: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:11:57.170: INFO: Container collectd ready: true, restart count 0 May 6 22:11:57.170: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:11:57.170: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:11:57.170: INFO: pod-configmaps-b1839ba6-a830-4fab-b0a8-f82580348828 started at 2022-05-06 22:10:34 +0000 UTC (0+3 container statuses recorded) May 6 22:11:57.170: INFO: Container createcm-volume-test ready: true, restart count 0 May 6 22:11:57.170: INFO: Container delcm-volume-test ready: true, restart count 0 May 6 22:11:57.170: INFO: Container updcm-volume-test ready: true, restart count 0 May 6 22:11:57.170: INFO: 
downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e started at (0+0 container statuses recorded) May 6 22:11:57.360: INFO: Latency metrics for node node1 May 6 22:11:57.360: INFO: Logging node info for node node2 May 6 22:11:57.363: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 39234 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:53 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:53 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:11:53 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:11:53 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:11:57.364: INFO: Logging kubelet events for node node2 May 6 22:11:57.365: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:11:57.386: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container kube-multus ready: true, restart count 1 May 6 22:11:57.386: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.386: INFO: Container nodereport ready: true, restart count 0 May 6 22:11:57.386: INFO: Container reconcile ready: true, restart count 0 May 6 22:11:57.386: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container tas-extender ready: true, restart count 0 May 6 22:11:57.386: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:11:57.386: INFO: liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:11:57.386: INFO: pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19 started at 2022-05-06 22:11:53 +0000 UTC (0+1 container statuses recorded) 
May 6 22:11:57.386: INFO: Container env-test ready: false, restart count 0 May 6 22:11:57.386: INFO: var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d started at 2022-05-06 22:10:33 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container dapi-container ready: false, restart count 0 May 6 22:11:57.386: INFO: test-webserver-349a0b41-df7a-4cb8-8f6a-67d535217175 started at 2022-05-06 22:08:04 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container test-webserver ready: true, restart count 0 May 6 22:11:57.386: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:11:57.386: INFO: Container collectd ready: true, restart count 0 May 6 22:11:57.386: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:11:57.386: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:11:57.386: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 22:11:57.386: INFO: Container discover ready: false, restart count 0 May 6 22:11:57.386: INFO: Container init ready: false, restart count 0 May 6 22:11:57.386: INFO: Container install ready: false, restart count 0 May 6 22:11:57.386: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:11:57.386: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:11:57.386: INFO: Container node-exporter ready: true, restart count 0 May 6 22:11:57.386: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:11:57.386: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:11:57.386: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:11:57.386: INFO: concurrent-27531251-5pr2p started at 2022-05-06 22:11:00 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container c ready: true, restart count 0 May 6 22:11:57.386: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:11:57.386: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:11:57.386: INFO: Init container install-cni ready: true, restart count 1 May 6 22:11:57.386: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:11:57.386: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:11:57.386: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:11:57.386: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:11:57.584: INFO: Latency metrics for node node2 May 6 22:11:57.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6080" for this suite. 
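The node and pod dump above is the e2e framework's standard failure diagnostics; the verdict that follows reports that the NodePort endpoint 10.10.190.207:31883 never answered within the 2m0s window. The reachability probe behind that message can be approximated by hand. A minimal sketch, assuming shell access to a host that can reach the node IP; the service name affinity-nodeport-timeout is hypothetical, while the address and port are taken from the failure message:

# Retry the NodePort endpoint the way the framework does: short HTTP
# requests in a loop until one succeeds or the overall window elapses.
for i in $(seq 1 60); do
  curl --connect-timeout 2 -s http://10.10.190.207:31883/ && break
  sleep 2
done

# If the endpoint does answer, the affinity settings under test can be
# read straight off the Service object (name is hypothetical).
kubectl get svc affinity-nodeport-timeout \
  -o jsonpath='{.spec.sessionAffinity} {.spec.sessionAffinityConfig.clientIP.timeoutSeconds}{"\n"}'

A timeout like this one more often points at the node-level data path (kube-proxy programming, host firewalling) than at the Service definition itself.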
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750

• Failure [151.694 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  May 6 22:11:46.539: Unexpected error:
      <*errors.errorString | 0xc000ffbb30>: {
          s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31883 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint 10.10.190.207:31883 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":14,"skipped":264,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:11:53.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1868/configmap-test-2b72eb52-eecf-4495-ae2a-b6cf45c273ab
STEP: Creating a pod to test consume configMaps
May 6 22:11:53.961: INFO: Waiting up to 5m0s for pod "pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19" in namespace "configmap-1868" to be "Succeeded or Failed"
May 6 22:11:53.963: INFO: Pod "pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.687332ms
May 6 22:11:55.966: INFO: Pod "pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005067746s
May 6 22:11:57.969: INFO: Pod "pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.008239532s
STEP: Saw pod success
May 6 22:11:57.969: INFO: Pod "pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19" satisfied condition "Succeeded or Failed"
May 6 22:11:57.972: INFO: Trying to get logs from node node2 pod pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19 container env-test:
STEP: delete the pod
May 6 22:11:57.985: INFO: Waiting for pod pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19 to disappear
May 6 22:11:57.987: INFO: Pod pod-configmaps-0fec506b-6312-4f99-be02-5bf7ee8c4d19 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:11:57.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1868" for this suite.
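The ConfigMap flow just logged (create a ConfigMap, start a pod that maps one key into its environment, wait for "Succeeded or Failed", read the container log) can be reproduced with plain kubectl. A minimal sketch, using hypothetical names demo-cm and env-test-pod and the busybox image already present in the node image lists above:

# Create a ConfigMap and a pod that consumes one key as an env var.
kubectl create configmap demo-cm --from-literal=DATA_1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["sh", "-c", "env | grep DATA_"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-cm
          key: DATA_1
EOF

# Once the pod completes, the phase and the log carry the two facts the
# conformance test asserts: the pod succeeded and the value was injected.
kubectl get pod env-test-pod -o jsonpath='{.status.phase}{"\n"}'
kubectl logs env-test-pod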
• ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":455,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:48.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0506 22:10:48.185549 34 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: Ensuring more than one job is running at a time STEP: Ensuring at least two running jobs exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:00.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5661" for this suite. • [SLOW TEST:72.048 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should schedule multiple jobs concurrently [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":11,"skipped":250,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:56.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:11:56.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e" in namespace "projected-5496" to be "Succeeded or Failed" May 6 22:11:56.791: INFO: Pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.54408ms May 6 22:11:58.796: INFO: Pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010598926s May 6 22:12:00.799: INFO: Pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.014372899s May 6 22:12:02.803: INFO: Pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018337757s STEP: Saw pod success May 6 22:12:02.803: INFO: Pod "downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e" satisfied condition "Succeeded or Failed" May 6 22:12:02.806: INFO: Trying to get logs from node node1 pod downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e container client-container: STEP: delete the pod May 6 22:12:02.821: INFO: Waiting for pod downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e to disappear May 6 22:12:02.823: INFO: Pod downwardapi-volume-bed39b48-eed8-4832-8793-0e05b44f528e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:02.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5496" for this suite. • [SLOW TEST:6.082 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":490,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:57.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 6 22:11:57.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9572 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 6 22:11:57.809: INFO: stderr: "" May 6 22:11:57.809: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run May 6 22:11:57.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9572 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server' May 6 22:11:58.235: INFO: stderr: "" May 6 22:11:58.235: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 6 22:11:58.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9572 delete pods e2e-test-httpd-pod' May 6 22:12:06.793: INFO: stderr: "" May 6 22:12:06.793: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] 
[sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:06.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9572" for this suite. • [SLOW TEST:9.179 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl server-side dry-run /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903 should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":15,"skipped":274,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:02.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:12:03.164: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:12:05.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471923, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471923, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471923, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471923, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:12:08.185: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 6 22:12:08.198: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] 
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:08.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3830" for this suite. STEP: Destroying namespace "webhook-3830-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.382 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":35,"skipped":505,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:08:04.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod test-webserver-349a0b41-df7a-4cb8-8f6a-67d535217175 in namespace container-probe-1786 May 6 22:08:10.252: INFO: Started pod test-webserver-349a0b41-df7a-4cb8-8f6a-67d535217175 in namespace container-probe-1786 STEP: checking the pod's current state and verifying that restartCount is present May 6 22:08:10.255: INFO: Initial restart count of pod test-webserver-349a0b41-df7a-4cb8-8f6a-67d535217175 is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:10.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1786" for this suite. 
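The probe test above runs a webserver pod for roughly four minutes and asserts that its restart count never moves off the initial value of 0, i.e. that a liveness probe which keeps passing never triggers a restart. A minimal sketch of such a pod follows; the thresholds are illustrative rather than the test's exact spec, and the agnhost image (whose test-webserver mode listens on port 80) is taken from the suite's image set:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["test-webserver"]
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF

# A probe that keeps passing never increments the restart count; the test
# does little more than poll this field for the duration of the run.
kubectl get pod liveness-demo \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'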
• [SLOW TEST:246.557 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":127,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:06.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication May 6 22:12:07.285: INFO: role binding webhook-auth-reader already exists STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:12:07.296: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:12:09.305: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471927, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471927, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471927, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:12:12.319: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:12:12.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3280" for this suite.
STEP: Destroying namespace "webhook-3280-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.584 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":16,"skipped":277,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:06:49.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
W0506 22:06:49.638740 37 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
May 6 22:06:49.639: INFO: Found PodSecurityPolicies; testing pod creation to see if PodSecurityPolicy is enabled
May 6 22:06:49.642: INFO: Error creating dryrun pod; assuming PodSecurityPolicy is disabled: admission webhook "cmk.intel.com" does not support dry run
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105
STEP: Creating service test in namespace statefulset-6640
[It] Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6640
STEP: Creating statefulset with conflicting port in namespace statefulset-6640
STEP: Waiting until pod test-pod will start running in namespace statefulset-6640
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6640
May 6 22:12:01.692: FAIL: Pod ss-0 expected to be re-created at least once

Full Stack Trace
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001903980)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001903980)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001903980, 0x70f99e8)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
[AfterEach] Basic StatefulSet functionality
[StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:12:01.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6640 describe po test-pod' May 6 22:12:01.897: INFO: stderr: "" May 6 22:12:01.897: INFO: stdout: "Name: test-pod\nNamespace: statefulset-6640\nPriority: 0\nNode: node1/10.10.190.207\nStart Time: Fri, 06 May 2022 22:06:49 +0000\nLabels: \nAnnotations: k8s.v1.cni.cncf.io/network-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.160\"\n ],\n \"mac\": \"42:d7:6a:16:0c:2c\",\n \"default\": true,\n \"dns\": {}\n }]\n k8s.v1.cni.cncf.io/networks-status:\n [{\n \"name\": \"default-cni-network\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.3.160\"\n ],\n \"mac\": \"42:d7:6a:16:0c:2c\",\n \"default\": true,\n \"dns\": {}\n }]\n kubernetes.io/psp: privileged\nStatus: Running\nIP: 10.244.3.160\nIPs:\n IP: 10.244.3.160\nContainers:\n webserver:\n Container ID: docker://360e16080bd7d20c25ecfa707ebd55874ee10e4ab481deb312db8e68c8ddfa64\n Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\n Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Fri, 06 May 2022 22:06:59 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj4w6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-cj4w6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: \n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulling 5m9s kubelet Pulling image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\"\n Normal Pulled 5m3s kubelet Successfully pulled image \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\" in 6.408672521s\n Normal Created 5m2s kubelet Created container webserver\n Normal Started 5m2s kubelet Started container webserver\n" May 6 22:12:01.897: INFO: Output of kubectl describe test-pod: Name: test-pod Namespace: statefulset-6640 Priority: 0 Node: node1/10.10.190.207 Start Time: Fri, 06 May 2022 22:06:49 +0000 Labels: Annotations: k8s.v1.cni.cncf.io/network-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.160" ], "mac": "42:d7:6a:16:0c:2c", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status: [{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.3.160" ], "mac": "42:d7:6a:16:0c:2c", "default": true, "dns": {} }] kubernetes.io/psp: privileged Status: Running IP: 10.244.3.160 IPs: IP: 10.244.3.160 Containers: webserver: Container ID: docker://360e16080bd7d20c25ecfa707ebd55874ee10e4ab481deb312db8e68c8ddfa64 Image: k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 Image ID: docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 Port: 21017/TCP Host Port: 21017/TCP State: Running Started: Fri, 06 May 2022 22:06:59 +0000 Ready: True Restart Count: 0 Environment: Mounts: 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cj4w6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-cj4w6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: DownwardAPI: true QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulling 5m9s kubelet Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" Normal Pulled 5m3s kubelet Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 6.408672521s Normal Created 5m2s kubelet Created container webserver Normal Started 5m2s kubelet Started container webserver May 6 22:12:01.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-6640 logs test-pod --tail=100' May 6 22:12:02.070: INFO: stderr: "" May 6 22:12:02.070: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.160. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.160. Set the 'ServerName' directive globally to suppress this message\n[Fri May 06 22:06:59.366160 2022] [mpm_event:notice] [pid 1:tid 140140315065192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Fri May 06 22:06:59.366199 2022] [core:notice] [pid 1:tid 140140315065192] AH00094: Command line: 'httpd -D FOREGROUND'\n" May 6 22:12:02.070: INFO: Last 100 log lines of test-pod: AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.160. Set the 'ServerName' directive globally to suppress this message AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.3.160. Set the 'ServerName' directive globally to suppress this message [Fri May 06 22:06:59.366160 2022] [mpm_event:notice] [pid 1:tid 140140315065192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations [Fri May 06 22:06:59.366199 2022] [core:notice] [pid 1:tid 140140315065192] AH00094: Command line: 'httpd -D FOREGROUND' May 6 22:12:02.070: INFO: Deleting all statefulset in ns statefulset-6640 May 6 22:12:02.072: INFO: Scaling statefulset ss to 0 May 6 22:12:02.083: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:12:12.091: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 STEP: Collecting events from namespace "statefulset-6640". STEP: Found 6 events. May 6 22:12:12.103: INFO: At 2022-05-06 22:06:49 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. 
Allowed ports: []] May 6 22:12:12.103: INFO: At 2022-05-06 22:06:49 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: pods "ss-0" is forbidden: PodSecurityPolicy: unable to admit pod: [spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9100] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [] spec.containers[0].hostPort: Invalid value: 21017: Host port 21017 is not allowed to be used. Allowed ports: [9103-9104]] May 6 22:12:12.103: INFO: At 2022-05-06 22:06:52 +0000 UTC - event for test-pod: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" May 6 22:12:12.103: INFO: At 2022-05-06 22:06:58 +0000 UTC - event for test-pod: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" in 6.408672521s May 6 22:12:12.103: INFO: At 2022-05-06 22:06:59 +0000 UTC - event for test-pod: {kubelet node1} Created: Created container webserver May 6 22:12:12.103: INFO: At 2022-05-06 22:06:59 +0000 UTC - event for test-pod: {kubelet node1} Started: Started container webserver May 6 22:12:12.106: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:12:12.106: INFO: test-pod node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:06:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:06:59 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:06:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:06:49 +0000 UTC }] May 6 22:12:12.106: INFO: May 6 22:12:12.110: INFO: Logging node info for node master1 May 6 22:12:12.112: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 39475 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:02 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:02 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:02 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:12:02 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:12:12.113: INFO: Logging kubelet events for node master1 May 6 22:12:12.115: INFO: Logging pods the kubelet 
thinks is on node master1 May 6 22:12:12.137: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.137: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:12:12.137: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.137: INFO: Container kube-controller-manager ready: true, restart count 2 May 6 22:12:12.137: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.138: INFO: Container kube-multus ready: true, restart count 1 May 6 22:12:12.138: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.138: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.138: INFO: Container node-exporter ready: true, restart count 0 May 6 22:12:12.138: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.138: INFO: Container docker-registry ready: true, restart count 0 May 6 22:12:12.138: INFO: Container nginx ready: true, restart count 0 May 6 22:12:12.138: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.138: INFO: Container kube-scheduler ready: true, restart count 0 May 6 22:12:12.138: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.138: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:12:12.138: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:12:12.138: INFO: Init container install-cni ready: true, restart count 0 May 6 22:12:12.138: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:12:12.138: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.138: INFO: Container coredns ready: true, restart count 1 May 6 22:12:12.222: INFO: Latency metrics for node master1 May 6 22:12:12.222: INFO: Logging node info for node master2 May 6 22:12:12.224: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 39670 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:08 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:08 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:08 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:12:08 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 
aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:12:12.225: INFO: Logging kubelet events for node master2 May 6 22:12:12.227: INFO: Logging pods the kubelet thinks is on node master2 May 6 22:12:12.233: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:12:12.233: INFO: Init container install-cni ready: true, restart count 0 May 6 22:12:12.233: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:12:12.233: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-multus ready: true, restart count 1 May 6 22:12:12.233: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:12:12.233: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:12:12.233: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container autoscaler ready: true, restart count 1 May 6 22:12:12.233: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.233: INFO: Container node-exporter ready: true, restart count 0 May 6 22:12:12.233: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-controller-manager ready: true, restart count 1 May 6 22:12:12.233: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.233: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:12:12.309: INFO: Latency metrics for node master2 May 6 22:12:12.309: INFO: Logging node info for node master3 May 6 22:12:12.316: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 39597 0 2022-05-06 20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f 
quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:12:12.316: INFO: Logging kubelet events for node master3 May 6 22:12:12.317: INFO: Logging pods the kubelet thinks is on node master3 May 6 22:12:12.327: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:12:12.327: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:12:12.327: INFO: Init container install-cni ready: true, restart count 2 May 6 22:12:12.327: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:12:12.327: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.327: INFO: Container node-exporter ready: true, restart count 0 May 6 22:12:12.327: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-controller-manager ready: true, restart count 3 May 6 22:12:12.327: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:12:12.327: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container coredns ready: true, restart count 1 May 6 22:12:12.327: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container nfd-controller ready: true, restart count 0 May 6 22:12:12.327: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:12:12.327: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.327: INFO: Container kube-multus ready: true, restart count 1 May 6 22:12:12.418: INFO: Latency metrics for node master3 May 6 22:12:12.418: INFO: Logging node info for node node1 May 6 22:12:12.421: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 39618 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true 
feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:12:06 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:12:12.422: INFO: Logging kubelet events for node node1 May 6 22:12:12.424: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:12:12.437: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.437: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:12:12.437: INFO: test-pod started at 2022-05-06 22:06:49 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.437: INFO: Container webserver ready: true, restart count 0 May 6 
22:12:12.437: INFO: ss-1 started at 2022-05-06 22:12:08 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container webserver ready: false, restart count 0 May 6 22:12:12.438: INFO: fail-once-local-4wt9d started at 2022-05-06 22:12:08 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container c ready: false, restart count 0 May 6 22:12:12.438: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container kube-multus ready: true, restart count 1 May 6 22:12:12.438: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.438: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.438: INFO: Container node-exporter ready: true, restart count 0 May 6 22:12:12.438: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:12:12.438: INFO: Container collectd ready: true, restart count 0 May 6 22:12:12.438: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:12:12.438: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:12:12.438: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.438: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.438: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:12:12.438: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded) May 6 22:12:12.438: INFO: Container config-reloader ready: true, restart count 0 May 6 22:12:12.438: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:12:12.438: INFO: Container grafana ready: true, restart count 0 May 6 22:12:12.438: INFO: Container prometheus ready: true, restart count 1 May 6 22:12:12.438: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:12:12.438: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 22:12:12.438: INFO: Container discover ready: false, restart count 0 May 6 22:12:12.438: INFO: Container init ready: false, restart count 0 May 6 22:12:12.438: INFO: Container install ready: false, restart count 0 May 6 22:12:12.438: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.438: INFO: Container nodereport ready: true, restart count 0 May 6 22:12:12.438: INFO: Container reconcile ready: true, restart count 0 May 6 22:12:12.438: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:12:12.438: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:12:12.438: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:12:12.438: INFO: Init container install-cni ready: true, restart count 2 May 6 22:12:12.438: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:12:12.438: INFO: test-deployment-7b4c744884-2xn72 started at 2022-05-06 22:12:10 +0000 
UTC (0+1 container statuses recorded) May 6 22:12:12.438: INFO: Container test-deployment ready: false, restart count 0 May 6 22:12:12.743: INFO: Latency metrics for node node1 May 6 22:12:12.743: INFO: Logging node info for node node2 May 6 22:12:12.747: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 39560 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:04 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:04 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:12:04 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:12:04 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:12:12.747: INFO: Logging kubelet events for node node2 May 6 22:12:12.749: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:12:12.770: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:12:12.770: INFO: Container collectd ready: true, restart count 0 May 6 22:12:12.770: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:12:12.770: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:12:12.770: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 22:12:12.770: INFO: Container discover ready: false, restart count 0 May 6 22:12:12.770: INFO: Container init ready: false, restart count 0 May 6 22:12:12.770: INFO: Container install ready: false, restart count 0 May 6 22:12:12.770: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.770: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:12:12.770: INFO: Container node-exporter ready: true, restart count 0 May 6 22:12:12.770: INFO: replace-27531252-l2fsc started at 2022-05-06 22:12:00 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container c ready: true, restart count 0 May 6 22:12:12.770: INFO: fail-once-local-b7kq8 started at 2022-05-06 22:12:08 +0000 UTC (0+1 
container statuses recorded) May 6 22:12:12.770: INFO: Container c ready: false, restart count 0 May 6 22:12:12.770: INFO: test-deployment-7b4c744884-t9g4x started at 2022-05-06 22:12:10 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container test-deployment ready: false, restart count 0 May 6 22:12:12.770: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:12:12.770: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:12:12.770: INFO: Init container install-cni ready: true, restart count 1 May 6 22:12:12.770: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:12:12.770: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:12:12.770: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:12:12.770: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:12:12.770: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:12:12.770: INFO: concurrent-27531251-5pr2p started at 2022-05-06 22:11:00 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container c ready: true, restart count 0 May 6 22:12:12.770: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:12:12.770: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container kube-multus ready: true, restart count 1 May 6 22:12:12.770: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:12:12.770: INFO: Container nodereport ready: true, restart count 0 May 6 22:12:12.770: INFO: Container reconcile ready: true, restart count 0 May 6 22:12:12.770: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.770: INFO: Container tas-extender ready: true, restart count 0 May 6 22:12:12.771: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.771: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:12:12.771: INFO: liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad started at 2022-05-06 22:09:34 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.771: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:12:12.771: INFO: ss-0 started at 2022-05-06 22:11:58 +0000 UTC (0+1 container statuses recorded) May 6 22:12:12.771: INFO: Container webserver ready: true, restart count 0 May 6 22:12:12.771: INFO: var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d started at 2022-05-06 22:10:33 +0000 UTC (0+1 container statuses recorded) May 6 
22:12:12.771: INFO: Container dapi-container ready: false, restart count 0 May 6 22:12:13.092: INFO: Latency metrics for node node2 May 6 22:12:13.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6640" for this suite. • Failure [323.491 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Should recreate evicted statefulset [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:01.692: Pod ss-0 expected to be re-created at least once /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 ------------------------------ {"msg":"FAILED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":0,"skipped":1,"failed":1,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:08.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:24.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6685" for this suite. 
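------------------------------
Annotation: the Job conformance test above ("tasks sometimes fail and are locally restarted") depends on restartPolicy: OnFailure, so the kubelet restarts the failing container in place rather than the Job controller re-creating pods. A minimal client-go sketch of creating such a Job, assuming the kubeconfig at /root/.kube/config used throughout this run; the namespace, image, and command are illustrative stand-ins, not the e2e fixture (which arranges for each pod to fail exactly once before succeeding).

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	completions := int32(4)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure is what makes the restart "local": the kubelet
					// restarts the container inside the same pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Containers: []corev1.Container{{
						Name:    "c",
						Image:   "busybox:1.28",
						Command: []string{"sh", "-c", "exit 0"}, // stand-in workload
					}},
				},
			},
		},
	}
	created, err := cs.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)
}
------------------------------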
• [SLOW TEST:16.038 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":36,"skipped":523,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:12.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9527.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9527.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.44.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.44.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.44.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.44.151_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9527.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9527.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9527.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9527.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9527.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 151.44.233.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.233.44.151_udp@PTR;check="$$(dig +tcp +noall +answer +search 151.44.233.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.233.44.151_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:12:20.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.475: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.478: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.496: INFO: Unable to read jessie_udp@dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.498: INFO: Unable to read jessie_tcp@dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.501: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.503: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local from pod dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36: the server could not find the requested resource (get pods dns-test-2d026e83-c675-4270-b2a1-468635a04b36) May 6 22:12:20.518: INFO: Lookups using dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36 failed for: [wheezy_udp@dns-test-service.dns-9527.svc.cluster.local wheezy_tcp@dns-test-service.dns-9527.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local jessie_udp@dns-test-service.dns-9527.svc.cluster.local jessie_tcp@dns-test-service.dns-9527.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9527.svc.cluster.local] May 6 22:12:25.572: INFO: DNS probes using dns-9527/dns-test-2d026e83-c675-4270-b2a1-468635a04b36 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:25.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9527" for this suite. 
• [SLOW TEST:13.192 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":17,"skipped":288,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:10.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready May 6 22:12:10.847: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.847: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.852: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.852: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.860: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.860: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.872: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:10.872: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 and labels map[test-deployment-static:true] May 6 22:12:14.565: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 6 22:12:14.565: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment-static:true] May 6 22:12:15.710: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment May 6 22:12:15.717: INFO: observed event type ADDED STEP: waiting for Replicas to scale May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with 
ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 0 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.719: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.723: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.723: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.729: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.729: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:15.736: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:15.736: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:15.742: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:15.742: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:19.780: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:19.780: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:19.791: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 STEP: listing Deployments May 6 22:12:19.795: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment May 6 22:12:19.806: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 STEP: fetching the DeploymentStatus May 6 22:12:19.813: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:19.813: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:19.817: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:19.825: INFO: observed Deployment test-deployment in namespace deployment-4946 with 
ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:19.828: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:24.443: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:24.456: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:24.470: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:24.476: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true] May 6 22:12:26.931: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus May 6 22:12:26.954: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:26.954: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:26.954: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:26.954: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 1 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 3 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 2 May 6 22:12:26.955: INFO: observed Deployment test-deployment in namespace deployment-4946 with ReadyReplicas 3 STEP: deleting the Deployment May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED May 6 22:12:26.961: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:12:26.963: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:26.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4946" for this suite. 
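------------------------------
Annotation: the Deployment lifecycle test drives create, patch, list, update, status-patch, and delete in sequence. The "patching the Deployment" step, after which the listing finds labels map[test-deployment:patched test-deployment-static:true], corresponds to a strategic merge patch along these lines; the kubeconfig path and names come from the log, while the exact patch body is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Strategic merge patch adding the label the test later lists on.
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}}}`)
	d, err := cs.AppsV1().Deployments("deployment-4946").Patch(
		context.TODO(), "test-deployment",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", d.Labels)
}
------------------------------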
• [SLOW TEST:16.157 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":7,"skipped":155,"failed":0} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:26.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should check is all data is printed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:27.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-4035 version' May 6 22:12:27.097: INFO: stderr: "" May 6 22:12:27.097: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.9\", GitCommit:\"b631974d68ac5045e076c86a5c66fba6f128dc72\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:51:12Z\", GoVersion:\"go1.16.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"21\", GitVersion:\"v1.21.1\", GitCommit:\"5e58841cce77d4bc13713ad2b91fa0d961e69192\", GitTreeState:\"clean\", BuildDate:\"2021-05-12T14:12:29Z\", GoVersion:\"go1.16.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:27.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4035" for this suite. 
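------------------------------
Annotation: the Kubectl version check below compares the client's build info against the server's. The server half of that output is a single discovery call; a sketch, again assuming the run's kubeconfig.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := dc.ServerVersion()
	if err != nil {
		panic(err)
	}
	// The same fields kubectl prints as "Server Version":
	// GitVersion, GoVersion, Platform.
	fmt.Println(v.GitVersion, v.GoVersion, v.Platform)
}
------------------------------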
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":8,"skipped":156,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:58.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2077 [It] should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating statefulset ss in namespace statefulset-2077 May 6 22:11:58.032: INFO: Found 0 stateful pods, waiting for 1 May 6 22:12:08.037: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified STEP: Patch a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:12:08.057: INFO: Deleting all statefulset in ns statefulset-2077 May 6 22:12:08.059: INFO: Scaling statefulset ss to 0 May 6 22:12:28.070: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:12:28.073: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:28.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2077" for this suite. 
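------------------------------
Annotation: the scale-subresource test above reads and writes StatefulSet replicas through /scale rather than the full object. A sketch using client-go's GetScale/UpdateScale; the namespace and name are taken from the log, and the target replica count is arbitrary.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ns, name := "statefulset-2077", "ss"

	// Read the scale subresource instead of the whole StatefulSet.
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas before:", scale.Spec.Replicas)

	// Write back through the same subresource; only Spec.Replicas is honored.
	scale.Spec.Replicas = 2
	updated, err := cs.AppsV1().StatefulSets(ns).UpdateScale(context.TODO(), name, scale, metav1.UpdateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("replicas after:", updated.Spec.Replicas)
}
------------------------------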
• [SLOW TEST:30.088 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:25.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:25.703: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:31.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7426" for this suite. • [SLOW TEST:5.550 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":18,"skipped":339,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:27.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-d65b1701-e4e0-4f1e-90f7-110a4d9980f9 STEP: Creating a pod to test consume secrets May 6 22:12:27.251: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1" in namespace 
"projected-4703" to be "Succeeded or Failed" May 6 22:12:27.253: INFO: Pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096699ms May 6 22:12:29.256: INFO: Pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004775931s May 6 22:12:31.260: INFO: Pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009092371s May 6 22:12:33.263: INFO: Pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011932467s STEP: Saw pod success May 6 22:12:33.263: INFO: Pod "pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1" satisfied condition "Succeeded or Failed" May 6 22:12:33.266: INFO: Trying to get logs from node node1 pod pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1 container projected-secret-volume-test: STEP: delete the pod May 6 22:12:33.307: INFO: Waiting for pod pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1 to disappear May 6 22:12:33.309: INFO: Pod pod-projected-secrets-357d05ee-271d-4f6b-ba09-d64fb2ce6dd1 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:33.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4703" for this suite. • [SLOW TEST:6.102 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":231,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:33.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting the auto-created API token May 6 22:12:33.931: INFO: created pod pod-service-account-defaultsa May 6 22:12:33.931: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 6 22:12:33.939: INFO: created pod pod-service-account-mountsa May 6 22:12:33.939: INFO: pod pod-service-account-mountsa service account token volume mount: true May 6 22:12:33.948: INFO: created pod pod-service-account-nomountsa May 6 22:12:33.948: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 6 22:12:33.957: INFO: created pod pod-service-account-defaultsa-mountspec May 6 22:12:33.957: INFO: pod pod-service-account-defaultsa-mountspec service account 
token volume mount: true May 6 22:12:33.966: INFO: created pod pod-service-account-mountsa-mountspec May 6 22:12:33.966: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 6 22:12:33.976: INFO: created pod pod-service-account-nomountsa-mountspec May 6 22:12:33.976: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 6 22:12:33.985: INFO: created pod pod-service-account-defaultsa-nomountspec May 6 22:12:33.985: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 6 22:12:33.993: INFO: created pod pod-service-account-mountsa-nomountspec May 6 22:12:33.993: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 6 22:12:34.001: INFO: created pod pod-service-account-nomountsa-nomountspec May 6 22:12:34.001: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:34.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3248" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:31.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-9dc15c98-a448-4cc1-bb75-caca8fd38d39 STEP: Creating a pod to test consume secrets May 6 22:12:31.328: INFO: Waiting up to 5m0s for pod "pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650" in namespace "secrets-7775" to be "Succeeded or Failed" May 6 22:12:31.330: INFO: Pod "pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154368ms May 6 22:12:33.333: INFO: Pod "pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004821873s May 6 22:12:35.336: INFO: Pod "pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.007535553s STEP: Saw pod success May 6 22:12:35.336: INFO: Pod "pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650" satisfied condition "Succeeded or Failed" May 6 22:12:35.337: INFO: Trying to get logs from node node1 pod pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650 container secret-volume-test: STEP: delete the pod May 6 22:12:35.351: INFO: Waiting for pod pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650 to disappear May 6 22:12:35.353: INFO: Pod pod-secrets-5a099949-bc33-4bbd-b005-92e1235a9650 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:35.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7775" for this suite. 
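------------------------------
Annotation: the Secrets volume test above projects one secret key to a mapped path with an explicit per-item file mode. A sketch of the corev1.Volume shape involved; the secret name is copied from the log, while the key and path names are assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// secretVolume shows the shape the test exercises: a single key of the
// secret is projected to a chosen file name with an explicit 0400 mode.
func secretVolume() corev1.Volume {
	mode := int32(0400)
	return corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map-9dc15c98-a448-4cc1-bb75-caca8fd38d39",
				Items: []corev1.KeyToPath{{
					Key:  "data-1",          // key name is an assumption
					Path: "new-path-data-1", // mapped file name inside the mount
					Mode: &mode,             // the "Item Mode" the test name refers to
				}},
			},
		},
	}
}

func main() { _ = secretVolume() }
------------------------------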
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":369,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:36.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 6 22:11:36.099: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 38870 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:11:36.099: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 38870 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 6 22:11:46.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39072 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:11:46.107: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39072 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 6 22:11:56.117: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39270 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC 
FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:11:56.118: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39270 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 6 22:12:06.125: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39596 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:12:06.126: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3958 543e4069-edf0-43b8-8f1d-e1695b421193 39596 0 2022-05-06 22:11:36 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-05-06 22:11:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 6 22:12:16.133: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3958 40b51e10-104e-4d50-bf02-cdbfcc0054bc 39955 0 2022-05-06 22:12:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-06 22:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:12:16.133: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3958 40b51e10-104e-4d50-bf02-cdbfcc0054bc 39955 0 2022-05-06 22:12:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-06 22:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 6 22:12:26.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3958 40b51e10-104e-4d50-bf02-cdbfcc0054bc 40240 0 2022-05-06 22:12:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-06 22:12:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} May 6 22:12:26.139: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3958 40b51e10-104e-4d50-bf02-cdbfcc0054bc 40240 0 2022-05-06 22:12:16 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-05-06 22:12:16 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:36.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3958" for this suite. • [SLOW TEST:60.076 seconds] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":17,"skipped":339,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:24.326: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:28.365: INFO: Deleting pod "var-expansion-e20ce569-92c1-4b46-97e9-8b5295b70abd" in namespace "var-expansion-9519" May 6 22:12:28.371: INFO: Wait up to 5m0s for pod "var-expansion-e20ce569-92c1-4b46-97e9-8b5295b70abd" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:38.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9519" for this suite. 
• [SLOW TEST:14.058 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":37,"skipped":530,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":10,"skipped":266,"failed":0} [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:34.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 22:12:34.037: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 22:12:39.041: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:40.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2134" for this suite. 
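------------------------------
Annotation: the ReplicationController test below relabels a matched pod so it falls outside the RC's selector; the controller then releases it (strips its controller ownerReference) and starts a replacement, which is why the log counts a new pod appearing. A sketch of the relabeling patch; the pod name is hypothetical, since RC pods get generated suffixes like pod-release-xxxxx.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Overwrite the label the RC selects on; the RC controller notices the
	// mismatch, removes its ownerReference from this pod asynchronously,
	// and creates a replacement to restore the replica count.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	p, err := cs.CoreV1().Pods("replication-controller-2134").Patch(
		context.TODO(), "pod-release-xxxxx", // hypothetical generated name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("labels now:", p.Labels)
}
------------------------------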
• [SLOW TEST:6.054 seconds] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":11,"skipped":266,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:00.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 6 22:12:00.243: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 6 22:12:18.586: INFO: >>> kubeConfig: /root/.kube/config May 6 22:12:27.290: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:45.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6552" for this suite. 
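------------------------------
The CRD-publishing case above asserts that every served version of a CRD (whether one multi-version CRD or two CRDs in the same group) shows up in the apiserver's aggregated OpenAPI document. A minimal sketch of fetching that document with client-go, assuming a configured *kubernetes.Clientset (cs); definitions for each served version can then be looked up by name in the returned JSON:

package sketches

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// fetchOpenAPIV2 pulls the aggregated OpenAPI v2 document from the
// apiserver, where published CRD schemas appear as definitions.
func fetchOpenAPIV2(ctx context.Context, cs *kubernetes.Clientset) ([]byte, error) {
	return cs.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do(ctx).Raw()
}
------------------------------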
• [SLOW TEST:45.495 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":12,"skipped":254,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:40.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:40.106: INFO: Creating deployment "test-recreate-deployment" May 6 22:12:40.109: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 22:12:40.115: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 6 22:12:42.121: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 22:12:42.123: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:12:44.126: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471960, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6cb8b65c46\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:12:46.126: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 22:12:46.132: INFO: Updating deployment test-recreate-deployment May 6 22:12:46.133: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:12:46.168: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-4033 486f57e8-6845-4c11-b68e-c5046173e113 40946 2 2022-05-06 22:12:40 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00410af48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-05-06 22:12:46 +0000 UTC,LastTransitionTime:2022-05-06 
22:12:46 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-85d47dcb4" is progressing.,LastUpdateTime:2022-05-06 22:12:46 +0000 UTC,LastTransitionTime:2022-05-06 22:12:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 6 22:12:46.171: INFO: New ReplicaSet "test-recreate-deployment-85d47dcb4" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-85d47dcb4 deployment-4033 8c6f6e6a-5cfd-4e89-bb3e-241e86c5d350 40945 1 2022-05-06 22:12:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 486f57e8-6845-4c11-b68e-c5046173e113 0xc00410b3b0 0xc00410b3b1}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"486f57e8-6845-4c11-b68e-c5046173e113\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 85d47dcb4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00410b428 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 22:12:46.172: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 6 22:12:46.172: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-6cb8b65c46 deployment-4033 2d467dd8-a68e-4f28-993a-03a9843a91c3 40934 2 2022-05-06 22:12:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment 
test-recreate-deployment 486f57e8-6845-4c11-b68e-c5046173e113 0xc00410b2b7 0xc00410b2b8}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"486f57e8-6845-4c11-b68e-c5046173e113\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6cb8b65c46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:6cb8b65c46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00410b348 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 22:12:46.175: INFO: Pod "test-recreate-deployment-85d47dcb4-77rkg" is not available: &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-77rkg test-recreate-deployment-85d47dcb4- deployment-4033 7dc777dc-1503-4e38-8c98-34333d8ba156 40947 0 2022-05-06 22:12:46 +0000 UTC map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 8c6f6e6a-5cfd-4e89-bb3e-241e86c5d350 0xc00410b85f 0xc00410b870}] [] [{kube-controller-manager Update v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8c6f6e6a-5cfd-4e89-bb3e-241e86c5d350\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2022-05-06 22:12:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-98nc8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-98nc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSecond
s:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:12:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:12:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:12:46 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:12:46 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.207,PodIP:,StartTime:2022-05-06 22:12:46 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:46.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4033" for this suite. 
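------------------------------
The Deployment dump above shows Strategy type Recreate, which is what forces all old pods to be deleted before any new pod is created when the rollout is triggered. A minimal sketch of such a Deployment object, assuming hypothetical name/image arguments:

package sketches

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newRecreateDeployment returns a single-replica Deployment with the
// Recreate strategy: on a template change, old pods are scaled down
// before new ones are created (what the watch in the test verifies).
// Create it with: cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
func newRecreateDeployment(name, image string) *appsv1.Deployment {
	one := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: appsv1.DeploymentSpec{
			Replicas: &one,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "httpd", Image: image}},
				},
			},
		},
	}
}
------------------------------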
• [SLOW TEST:6.098 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":12,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:36.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics May 6 22:12:46.242: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 6 22:12:46.428: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:46.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6474" for this suite. 
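------------------------------
The garbage-collector case above deletes the ReplicationController without orphaning, so its pods are garbage collected. A minimal sketch of that delete, assuming a configured *kubernetes.Clientset (cs); background propagation is an assumption here (foreground would also avoid orphaning):

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCWithDependents deletes a ReplicationController and lets the
// garbage collector remove its pods rather than orphaning them.
func deleteRCWithDependents(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name,
		metav1.DeleteOptions{PropagationPolicy: &policy})
}
------------------------------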
• [SLOW TEST:10.268 seconds] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":18,"skipped":346,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:45.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a Pod with a static label STEP: watching for Pod to be ready May 6 22:12:45.771: INFO: observed Pod pod-test in namespace pods-6847 in phase Pending with labels: map[test-pod-static:true] & conditions [] May 6 22:12:45.774: INFO: observed Pod pod-test in namespace pods-6847 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC }] May 6 22:12:45.790: INFO: observed Pod pod-test in namespace pods-6847 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC }] May 6 22:12:47.716: INFO: observed Pod pod-test in namespace pods-6847 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC }] May 6 22:12:50.131: INFO: Found Pod pod-test in namespace pods-6847 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:49 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:49 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:45 +0000 UTC }] STEP: patching the Pod with a new Label and updated data May 6 
22:12:50.143: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: getting the PodStatus STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted May 6 22:12:50.164: INFO: observed event type ADDED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED May 6 22:12:50.164: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:50.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6847" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":13,"skipped":255,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:46.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:12:46.514: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4" in namespace "projected-1069" to be "Succeeded or Failed" May 6 22:12:46.516: INFO: Pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164727ms May 6 22:12:48.520: INFO: Pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005887529s May 6 22:12:50.524: INFO: Pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01007515s May 6 22:12:52.528: INFO: Pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014192673s STEP: Saw pod success May 6 22:12:52.528: INFO: Pod "downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4" satisfied condition "Succeeded or Failed" May 6 22:12:52.531: INFO: Trying to get logs from node node1 pod downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4 container client-container: STEP: delete the pod May 6 22:12:52.543: INFO: Waiting for pod downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4 to disappear May 6 22:12:52.545: INFO: Pod downwardapi-volume-4b8081b4-9952-4e65-8914-47855b9751c4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:52.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1069" for this suite. • [SLOW TEST:6.080 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":371,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:46.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:53.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4883" for this suite. • [SLOW TEST:7.041 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":13,"skipped":298,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:38.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:38.430: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 6 22:12:47.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 create -f -' May 6 22:12:48.096: INFO: stderr: "" May 6 22:12:48.096: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 22:12:48.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 delete e2e-test-crd-publish-openapi-1331-crds test-foo' May 6 22:12:48.282: INFO: stderr: "" May 6 22:12:48.282: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 6 22:12:48.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 apply -f -' May 6 22:12:48.652: INFO: stderr: "" May 6 22:12:48.652: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 22:12:48.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 delete e2e-test-crd-publish-openapi-1331-crds test-foo' May 6 22:12:48.809: INFO: stderr: "" May 6 22:12:48.809: INFO: stdout: "e2e-test-crd-publish-openapi-1331-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 6 22:12:48.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 create -f -' May 6 22:12:49.139: INFO: rc: 1 May 6 22:12:49.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 apply -f -' May 6 22:12:49.454: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 6 22:12:49.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 create -f -' May 6 22:12:49.793: INFO: rc: 1 May 6 22:12:49.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 --namespace=crd-publish-openapi-3804 apply -f -' May 6 22:12:50.127: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 6 22:12:50.128: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 explain e2e-test-crd-publish-openapi-1331-crds' May 6 22:12:50.496: INFO: stderr: "" May 6 22:12:50.497: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 6 22:12:50.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 explain e2e-test-crd-publish-openapi-1331-crds.metadata' May 6 22:12:50.850: INFO: stderr: "" May 6 22:12:50.850: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. 
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 6 22:12:50.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 explain e2e-test-crd-publish-openapi-1331-crds.spec' May 6 22:12:51.205: INFO: stderr: "" May 6 22:12:51.205: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 6 22:12:51.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 explain e2e-test-crd-publish-openapi-1331-crds.spec.bars' May 6 22:12:51.540: INFO: stderr: "" May 6 22:12:51.540: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1331-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 6 22:12:51.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3804 explain e2e-test-crd-publish-openapi-1331-crds.spec.bars2' May 6 22:12:51.882: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:55.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3804" for this suite. • [SLOW TEST:17.129 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":38,"skipped":549,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:50.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 6 22:12:50.242: INFO: The status of Pod annotationupdate5e85b26a-8d22-4768-b987-50dd42d54278 is Pending, waiting for it to be Running (with Ready = true) May 6 22:12:52.246: INFO: The status of Pod annotationupdate5e85b26a-8d22-4768-b987-50dd42d54278 is Pending, waiting for it to be Running (with Ready = true) May 6 22:12:54.247: INFO: The status of Pod 
annotationupdate5e85b26a-8d22-4768-b987-50dd42d54278 is Running (Ready = true) May 6 22:12:54.768: INFO: Successfully updated pod "annotationupdate5e85b26a-8d22-4768-b987-50dd42d54278" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:56.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8962" for this suite. • [SLOW TEST:6.597 seconds] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":275,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:56.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:56.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7190" for this suite. 
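------------------------------
The Kubelet case above only asserts that a pod whose busybox command always fails can still be deleted cleanly. A minimal sketch of the deletion, assuming a configured *kubernetes.Clientset (cs); the zero grace period is an assumption for immediacy, not necessarily what the suite itself uses:

package sketches

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteFailingPod removes a pod regardless of its (crash-looping) state.
func deleteFailingPod(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	grace := int64(0) // assumed: delete immediately
	return cs.CoreV1().Pods(ns).Delete(ctx, name,
		metav1.DeleteOptions{GracePeriodSeconds: &grace})
}
------------------------------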
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":296,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:55.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-d98882cd-06f6-4620-a308-434e72ca06af STEP: Creating a pod to test consume configMaps May 6 22:12:55.605: INFO: Waiting up to 5m0s for pod "pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1" in namespace "configmap-5834" to be "Succeeded or Failed" May 6 22:12:55.611: INFO: Pod "pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.006953ms May 6 22:12:57.614: INFO: Pod "pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00840208s May 6 22:12:59.617: INFO: Pod "pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011369621s STEP: Saw pod success May 6 22:12:59.617: INFO: Pod "pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1" satisfied condition "Succeeded or Failed" May 6 22:12:59.619: INFO: Trying to get logs from node node2 pod pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1 container agnhost-container: STEP: delete the pod May 6 22:12:59.650: INFO: Waiting for pod pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1 to disappear May 6 22:12:59.653: INFO: Pod pod-configmaps-d235cb1a-fde0-4338-9c6d-ed83f4ad39b1 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:59.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5834" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":564,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:59.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0506 22:12:59.824951 29 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching May 6 22:12:59.833: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching May 6 22:12:59.836: INFO: starting watch STEP: patching STEP: updating May 6 22:12:59.851: INFO: waiting for watch events with expected annotations May 6 22:12:59.851: INFO: saw patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:12:59.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-3572" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":40,"skipped":642,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:11:56.129: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0506 22:11:56.152869 28 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:00.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-6908" for this suite. 
• [SLOW TEST:64.049 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":18,"skipped":224,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-7d2acff4-afea-4f19-ac2d-0c56e7fae5ae STEP: Creating a pod to test consume configMaps May 6 22:12:56.942: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640" in namespace "projected-8978" to be "Succeeded or Failed" May 6 22:12:56.946: INFO: Pod "pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640": Phase="Pending", Reason="", readiness=false. Elapsed: 3.957247ms May 6 22:12:58.950: INFO: Pod "pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008114647s May 6 22:13:00.955: INFO: Pod "pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013238197s STEP: Saw pod success May 6 22:13:00.955: INFO: Pod "pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640" satisfied condition "Succeeded or Failed" May 6 22:13:00.957: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640 container agnhost-container: STEP: delete the pod May 6 22:13:00.970: INFO: Waiting for pod pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640 to disappear May 6 22:13:00.972: INFO: Pod pod-projected-configmaps-cc708a9c-6e4a-40d9-be9e-f4cd68905640 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:00.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8978" for this suite. 
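The projected ConfigMap test differs from the plain ConfigMap volume case only in its volume source: a projected volume can merge several sources (configMaps, secrets, downward API, service account tokens) into one directory. A sketch with a single ConfigMap projection (names and image are again illustrative assumptions):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-volume",
				VolumeSource: corev1.VolumeSource{
					// A projected volume lists its sources; more entries
					// (secrets, downward API, ...) could sit alongside this one.
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "cm-demo"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "reader",
				Image:   "busybox", // illustrative image
				Command: []string{"ls", "/etc/projected"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```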
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":304,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:59.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-24260204-f576-4d27-9af9-b0975ac3b17e STEP: Creating a pod to test consume configMaps May 6 22:12:59.966: INFO: Waiting up to 5m0s for pod "pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924" in namespace "configmap-3673" to be "Succeeded or Failed" May 6 22:12:59.968: INFO: Pod "pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336958ms May 6 22:13:01.972: INFO: Pod "pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006278519s May 6 22:13:03.977: INFO: Pod "pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010761842s STEP: Saw pod success May 6 22:13:03.977: INFO: Pod "pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924" satisfied condition "Succeeded or Failed" May 6 22:13:03.980: INFO: Trying to get logs from node node1 pod pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924 container configmap-volume-test: STEP: delete the pod May 6 22:13:04.063: INFO: Waiting for pod pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924 to disappear May 6 22:13:04.065: INFO: Pod pod-configmaps-06f4dd98-4534-4976-bcdc-a6eb4f673924 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:04.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3673" for this suite. 
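All of these short-lived pod tests share the wait pattern visible in the repeated Phase="Pending" … Elapsed lines: after creating the pod, poll it every couple of seconds until it reaches Succeeded or Failed, within a 5m budget. A sketch of that step (pod name and namespace are placeholders, and the 2s interval is an assumption inferred from the ~2s spacing of the Elapsed values):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns, name := "default", "pod-configmaps-demo" // placeholders

	start := time.Now()
	// Poll every 2s, give up after 5m -- the same budget as the
	// `Waiting up to 5m0s ... to be "Succeeded or Failed"` lines above.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
		// Terminal phases end the wait; anything else keeps polling.
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
}
```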
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":666,"failed":0} SS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:04.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:04.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6334" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":42,"skipped":668,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:04.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:04.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7118" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":43,"skipped":676,"failed":0} SSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:00.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override arguments May 6 22:13:00.218: INFO: Waiting up to 5m0s for pod "client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665" in namespace "containers-2901" to be "Succeeded or Failed" May 6 22:13:00.220: INFO: Pod "client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.205913ms May 6 22:13:02.224: INFO: Pod "client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005516796s May 6 22:13:04.227: INFO: Pod "client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009253911s STEP: Saw pod success May 6 22:13:04.228: INFO: Pod "client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665" satisfied condition "Succeeded or Failed" May 6 22:13:04.229: INFO: Trying to get logs from node node1 pod client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665 container agnhost-container: STEP: delete the pod May 6 22:13:04.255: INFO: Waiting for pod client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665 to disappear May 6 22:13:04.259: INFO: Pod client-containers-f99d5b07-ddad-4b9d-941d-0f92fc8be665 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:04.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2901" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":228,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:52.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:12:52.621: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-4497 I0506 22:12:52.640161 30 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4497, replica count: 1 I0506 22:12:53.692289 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:12:54.692505 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:12:55.693305 30 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:12:55.802: INFO: Created: latency-svc-mmgwc May 6 22:12:55.807: INFO: Got endpoints: latency-svc-mmgwc [13.291965ms] May 6 22:12:55.814: INFO: Created: latency-svc-95rdp May 6 22:12:55.816: INFO: Created: latency-svc-pnjwh May 6 22:12:55.817: INFO: Got endpoints: latency-svc-95rdp [9.100108ms] May 6 22:12:55.818: INFO: Created: latency-svc-nss9h May 6 22:12:55.819: INFO: Got endpoints: latency-svc-pnjwh [11.772494ms] May 6 22:12:55.820: INFO: Got endpoints: latency-svc-nss9h [12.609211ms] May 6 22:12:55.821: INFO: Created: latency-svc-59wrx May 6 22:12:55.824: INFO: Got endpoints: latency-svc-59wrx [15.850328ms] May 6 22:12:55.824: INFO: Created: latency-svc-x45gn May 6 22:12:55.827: INFO: Got endpoints: latency-svc-x45gn [18.88015ms] May 6 22:12:55.827: INFO: Created: latency-svc-rwq7v May 6 22:12:55.829: INFO: Got endpoints: latency-svc-rwq7v [21.568276ms] May 6 22:12:55.832: INFO: Created: latency-svc-x6j6f May 6 22:12:55.834: INFO: Got endpoints: latency-svc-x6j6f [26.632326ms] May 6 22:12:55.836: INFO: Created: latency-svc-7swnn May 6 22:12:55.838: INFO: Created: latency-svc-lgsm4 May 6 22:12:55.838: INFO: Got endpoints: latency-svc-7swnn [30.484997ms] May 6 22:12:55.841: INFO: Got endpoints: latency-svc-lgsm4 [32.758473ms] May 6 22:12:55.841: INFO: Created: latency-svc-mnb78 May 6 22:12:55.843: INFO: Got endpoints: latency-svc-mnb78 [35.351272ms] May 6 22:12:55.844: INFO: Created: latency-svc-dlbwm May 6 22:12:55.847: INFO: Created: latency-svc-k6tmf May 6 22:12:55.847: INFO: Got endpoints: latency-svc-dlbwm [38.591437ms] May 6 22:12:55.849: INFO: Got endpoints: latency-svc-k6tmf [41.018295ms] May 6 22:12:55.850: INFO: Created: latency-svc-lh6pv May 6 22:12:55.851: INFO: Got endpoints: latency-svc-lh6pv [43.367554ms] May 6 22:12:55.853: INFO: Created: latency-svc-zhd8z May 6 22:12:55.855: INFO: Got endpoints: latency-svc-zhd8z [47.260728ms] May 6 22:12:55.856: INFO: Created: 
latency-svc-tv52g May 6 22:12:55.858: INFO: Got endpoints: latency-svc-tv52g [49.965324ms] May 6 22:12:55.859: INFO: Created: latency-svc-85fcl May 6 22:12:55.861: INFO: Got endpoints: latency-svc-85fcl [44.77218ms] May 6 22:12:55.862: INFO: Created: latency-svc-6l2l7 May 6 22:12:55.865: INFO: Got endpoints: latency-svc-6l2l7 [45.353215ms] May 6 22:12:55.865: INFO: Created: latency-svc-xpln5 May 6 22:12:55.868: INFO: Got endpoints: latency-svc-xpln5 [47.344664ms] May 6 22:12:55.868: INFO: Created: latency-svc-dzrcb May 6 22:12:55.871: INFO: Got endpoints: latency-svc-dzrcb [47.23939ms] May 6 22:12:55.871: INFO: Created: latency-svc-vrz8v May 6 22:12:55.873: INFO: Got endpoints: latency-svc-vrz8v [46.542554ms] May 6 22:12:55.874: INFO: Created: latency-svc-gshkv May 6 22:12:55.876: INFO: Got endpoints: latency-svc-gshkv [46.747291ms] May 6 22:12:55.878: INFO: Created: latency-svc-jr9rx May 6 22:12:55.880: INFO: Got endpoints: latency-svc-jr9rx [45.76556ms] May 6 22:12:55.880: INFO: Created: latency-svc-dqdsj May 6 22:12:55.883: INFO: Created: latency-svc-ztw4r May 6 22:12:55.883: INFO: Got endpoints: latency-svc-dqdsj [44.414782ms] May 6 22:12:55.885: INFO: Got endpoints: latency-svc-ztw4r [44.292738ms] May 6 22:12:55.886: INFO: Created: latency-svc-msrhs May 6 22:12:55.887: INFO: Got endpoints: latency-svc-msrhs [44.000523ms] May 6 22:12:55.889: INFO: Created: latency-svc-dk5tw May 6 22:12:55.891: INFO: Got endpoints: latency-svc-dk5tw [44.510393ms] May 6 22:12:55.891: INFO: Created: latency-svc-p2jn7 May 6 22:12:55.894: INFO: Got endpoints: latency-svc-p2jn7 [44.900091ms] May 6 22:12:55.897: INFO: Created: latency-svc-7ntg4 May 6 22:12:55.899: INFO: Got endpoints: latency-svc-7ntg4 [47.949259ms] May 6 22:12:55.900: INFO: Created: latency-svc-7dvbv May 6 22:12:55.901: INFO: Created: latency-svc-j8mr8 May 6 22:12:55.902: INFO: Got endpoints: latency-svc-7dvbv [46.945693ms] May 6 22:12:55.903: INFO: Got endpoints: latency-svc-j8mr8 [45.230721ms] May 6 22:12:55.905: INFO: Created: latency-svc-j4jsq May 6 22:12:55.908: INFO: Created: latency-svc-qg8d9 May 6 22:12:55.908: INFO: Got endpoints: latency-svc-j4jsq [46.275879ms] May 6 22:12:55.911: INFO: Created: latency-svc-qslpg May 6 22:12:55.913: INFO: Created: latency-svc-gtfbs May 6 22:12:55.915: INFO: Created: latency-svc-2wr4f May 6 22:12:55.917: INFO: Created: latency-svc-dwvp2 May 6 22:12:55.921: INFO: Created: latency-svc-bh4g6 May 6 22:12:55.923: INFO: Created: latency-svc-zfphc May 6 22:12:55.926: INFO: Created: latency-svc-w2tpt May 6 22:12:55.929: INFO: Created: latency-svc-7djqn May 6 22:12:55.931: INFO: Created: latency-svc-f77r5 May 6 22:12:55.933: INFO: Created: latency-svc-m955p May 6 22:12:55.937: INFO: Created: latency-svc-mzmxg May 6 22:12:55.939: INFO: Created: latency-svc-cbswg May 6 22:12:55.941: INFO: Created: latency-svc-4nchl May 6 22:12:55.946: INFO: Created: latency-svc-6ff6d May 6 22:12:55.955: INFO: Got endpoints: latency-svc-qg8d9 [90.537636ms] May 6 22:12:55.961: INFO: Created: latency-svc-dx6dd May 6 22:12:56.006: INFO: Got endpoints: latency-svc-qslpg [138.15631ms] May 6 22:12:56.010: INFO: Created: latency-svc-9rtd9 May 6 22:12:56.057: INFO: Got endpoints: latency-svc-gtfbs [185.80617ms] May 6 22:12:56.062: INFO: Created: latency-svc-w5smv May 6 22:12:56.106: INFO: Got endpoints: latency-svc-2wr4f [232.892328ms] May 6 22:12:56.112: INFO: Created: latency-svc-4fr8t May 6 22:12:56.156: INFO: Got endpoints: latency-svc-dwvp2 [279.820619ms] May 6 22:12:56.161: INFO: Created: latency-svc-j2hpl May 6 22:12:56.206: 
INFO: Got endpoints: latency-svc-bh4g6 [325.326732ms] May 6 22:12:56.211: INFO: Created: latency-svc-kpfs2 May 6 22:12:56.256: INFO: Got endpoints: latency-svc-zfphc [373.11994ms] May 6 22:12:56.262: INFO: Created: latency-svc-22g8q May 6 22:12:56.306: INFO: Got endpoints: latency-svc-w2tpt [421.244365ms] May 6 22:12:56.312: INFO: Created: latency-svc-sm84k May 6 22:12:56.355: INFO: Got endpoints: latency-svc-7djqn [467.976483ms] May 6 22:12:56.361: INFO: Created: latency-svc-6lf54 May 6 22:12:56.406: INFO: Got endpoints: latency-svc-f77r5 [515.28188ms] May 6 22:12:56.413: INFO: Created: latency-svc-6cxdg May 6 22:12:56.456: INFO: Got endpoints: latency-svc-m955p [562.454943ms] May 6 22:12:56.461: INFO: Created: latency-svc-fdkff May 6 22:12:56.505: INFO: Got endpoints: latency-svc-mzmxg [605.793094ms] May 6 22:12:56.511: INFO: Created: latency-svc-8s5qk May 6 22:12:56.556: INFO: Got endpoints: latency-svc-cbswg [653.85738ms] May 6 22:12:56.562: INFO: Created: latency-svc-vgbd8 May 6 22:12:56.607: INFO: Got endpoints: latency-svc-4nchl [703.250147ms] May 6 22:12:56.614: INFO: Created: latency-svc-s628x May 6 22:12:56.656: INFO: Got endpoints: latency-svc-6ff6d [747.937838ms] May 6 22:12:56.662: INFO: Created: latency-svc-dmpm6 May 6 22:12:56.707: INFO: Got endpoints: latency-svc-dx6dd [751.262886ms] May 6 22:12:56.713: INFO: Created: latency-svc-4kb9q May 6 22:12:56.756: INFO: Got endpoints: latency-svc-9rtd9 [750.034641ms] May 6 22:12:56.763: INFO: Created: latency-svc-mds49 May 6 22:12:56.805: INFO: Got endpoints: latency-svc-w5smv [748.703311ms] May 6 22:12:56.812: INFO: Created: latency-svc-mc9xf May 6 22:12:56.856: INFO: Got endpoints: latency-svc-4fr8t [749.378542ms] May 6 22:12:56.861: INFO: Created: latency-svc-9g9vv May 6 22:12:56.906: INFO: Got endpoints: latency-svc-j2hpl [750.125547ms] May 6 22:12:56.911: INFO: Created: latency-svc-hcbdd May 6 22:12:56.956: INFO: Got endpoints: latency-svc-kpfs2 [750.157228ms] May 6 22:12:56.964: INFO: Created: latency-svc-xr24m May 6 22:12:57.007: INFO: Got endpoints: latency-svc-22g8q [751.127075ms] May 6 22:12:57.012: INFO: Created: latency-svc-kblhb May 6 22:12:57.056: INFO: Got endpoints: latency-svc-sm84k [749.551622ms] May 6 22:12:57.061: INFO: Created: latency-svc-ldjxv May 6 22:12:57.106: INFO: Got endpoints: latency-svc-6lf54 [750.567628ms] May 6 22:12:57.112: INFO: Created: latency-svc-tjdvk May 6 22:12:57.156: INFO: Got endpoints: latency-svc-6cxdg [749.3062ms] May 6 22:12:57.161: INFO: Created: latency-svc-rn8kc May 6 22:12:57.206: INFO: Got endpoints: latency-svc-fdkff [749.648687ms] May 6 22:12:57.212: INFO: Created: latency-svc-gzpkl May 6 22:12:57.256: INFO: Got endpoints: latency-svc-8s5qk [750.647296ms] May 6 22:12:57.262: INFO: Created: latency-svc-hmnnt May 6 22:12:57.306: INFO: Got endpoints: latency-svc-vgbd8 [749.693438ms] May 6 22:12:57.312: INFO: Created: latency-svc-lmcfw May 6 22:12:57.355: INFO: Got endpoints: latency-svc-s628x [748.711141ms] May 6 22:12:57.360: INFO: Created: latency-svc-g54vm May 6 22:12:57.407: INFO: Got endpoints: latency-svc-dmpm6 [750.803936ms] May 6 22:12:57.413: INFO: Created: latency-svc-dvkms May 6 22:12:57.457: INFO: Got endpoints: latency-svc-4kb9q [749.953401ms] May 6 22:12:57.463: INFO: Created: latency-svc-jwg4m May 6 22:12:57.506: INFO: Got endpoints: latency-svc-mds49 [750.396966ms] May 6 22:12:57.512: INFO: Created: latency-svc-rvmkp May 6 22:12:57.556: INFO: Got endpoints: latency-svc-mc9xf [750.122722ms] May 6 22:12:57.561: INFO: Created: latency-svc-2hprx May 6 22:12:57.608: 
INFO: Got endpoints: latency-svc-9g9vv [752.664142ms] May 6 22:12:57.614: INFO: Created: latency-svc-nklh8 May 6 22:12:57.661: INFO: Got endpoints: latency-svc-hcbdd [754.954469ms] May 6 22:12:57.667: INFO: Created: latency-svc-r2hdb May 6 22:12:57.705: INFO: Got endpoints: latency-svc-xr24m [749.379047ms] May 6 22:12:57.711: INFO: Created: latency-svc-758v5 May 6 22:12:57.756: INFO: Got endpoints: latency-svc-kblhb [749.130886ms] May 6 22:12:57.764: INFO: Created: latency-svc-trkgg May 6 22:12:57.807: INFO: Got endpoints: latency-svc-ldjxv [750.926838ms] May 6 22:12:57.814: INFO: Created: latency-svc-87vd5 May 6 22:12:57.857: INFO: Got endpoints: latency-svc-tjdvk [750.551573ms] May 6 22:12:57.863: INFO: Created: latency-svc-8jkbq May 6 22:12:57.906: INFO: Got endpoints: latency-svc-rn8kc [750.556447ms] May 6 22:12:57.912: INFO: Created: latency-svc-v5592 May 6 22:12:57.955: INFO: Got endpoints: latency-svc-gzpkl [749.327964ms] May 6 22:12:57.960: INFO: Created: latency-svc-tnvm2 May 6 22:12:58.007: INFO: Got endpoints: latency-svc-hmnnt [750.562239ms] May 6 22:12:58.013: INFO: Created: latency-svc-dnbnd May 6 22:12:58.056: INFO: Got endpoints: latency-svc-lmcfw [750.320716ms] May 6 22:12:58.062: INFO: Created: latency-svc-hrmrr May 6 22:12:58.105: INFO: Got endpoints: latency-svc-g54vm [749.571749ms] May 6 22:12:58.110: INFO: Created: latency-svc-glkx5 May 6 22:12:58.155: INFO: Got endpoints: latency-svc-dvkms [748.36234ms] May 6 22:12:58.161: INFO: Created: latency-svc-8zj8g May 6 22:12:58.206: INFO: Got endpoints: latency-svc-jwg4m [749.823737ms] May 6 22:12:58.212: INFO: Created: latency-svc-dqcgj May 6 22:12:58.256: INFO: Got endpoints: latency-svc-rvmkp [749.627781ms] May 6 22:12:58.261: INFO: Created: latency-svc-v2xm5 May 6 22:12:58.306: INFO: Got endpoints: latency-svc-2hprx [749.811322ms] May 6 22:12:58.311: INFO: Created: latency-svc-2tbsh May 6 22:12:58.356: INFO: Got endpoints: latency-svc-nklh8 [747.59098ms] May 6 22:12:58.362: INFO: Created: latency-svc-m8nkv May 6 22:12:58.406: INFO: Got endpoints: latency-svc-r2hdb [744.516498ms] May 6 22:12:58.411: INFO: Created: latency-svc-jd7gj May 6 22:12:58.456: INFO: Got endpoints: latency-svc-758v5 [750.666787ms] May 6 22:12:58.461: INFO: Created: latency-svc-7kvts May 6 22:12:58.506: INFO: Got endpoints: latency-svc-trkgg [749.914475ms] May 6 22:12:58.512: INFO: Created: latency-svc-pwmfv May 6 22:12:58.556: INFO: Got endpoints: latency-svc-87vd5 [748.645564ms] May 6 22:12:58.560: INFO: Created: latency-svc-v48rq May 6 22:12:58.606: INFO: Got endpoints: latency-svc-8jkbq [749.083068ms] May 6 22:12:58.615: INFO: Created: latency-svc-zhjxc May 6 22:12:58.656: INFO: Got endpoints: latency-svc-v5592 [749.69693ms] May 6 22:12:58.662: INFO: Created: latency-svc-r2btp May 6 22:12:58.706: INFO: Got endpoints: latency-svc-tnvm2 [750.391233ms] May 6 22:12:58.711: INFO: Created: latency-svc-pzqr4 May 6 22:12:58.756: INFO: Got endpoints: latency-svc-dnbnd [749.401476ms] May 6 22:12:58.762: INFO: Created: latency-svc-mgvzl May 6 22:12:58.805: INFO: Got endpoints: latency-svc-hrmrr [748.931041ms] May 6 22:12:58.810: INFO: Created: latency-svc-jx7kv May 6 22:12:58.859: INFO: Got endpoints: latency-svc-glkx5 [753.844782ms] May 6 22:12:58.869: INFO: Created: latency-svc-fnxfb May 6 22:12:58.907: INFO: Got endpoints: latency-svc-8zj8g [751.580738ms] May 6 22:12:58.914: INFO: Created: latency-svc-rvrwb May 6 22:12:58.956: INFO: Got endpoints: latency-svc-dqcgj [749.144788ms] May 6 22:12:58.962: INFO: Created: latency-svc-wr4wh May 6 22:12:59.006: 
INFO: Got endpoints: latency-svc-v2xm5 [749.897138ms] May 6 22:12:59.011: INFO: Created: latency-svc-mwzhp May 6 22:12:59.056: INFO: Got endpoints: latency-svc-2tbsh [750.81725ms] May 6 22:12:59.062: INFO: Created: latency-svc-m2pp9 May 6 22:12:59.107: INFO: Got endpoints: latency-svc-m8nkv [750.627073ms] May 6 22:12:59.113: INFO: Created: latency-svc-vkfdc May 6 22:12:59.155: INFO: Got endpoints: latency-svc-jd7gj [749.237297ms] May 6 22:12:59.160: INFO: Created: latency-svc-6bssp May 6 22:12:59.205: INFO: Got endpoints: latency-svc-7kvts [749.317043ms] May 6 22:12:59.213: INFO: Created: latency-svc-v6lpv May 6 22:12:59.256: INFO: Got endpoints: latency-svc-pwmfv [750.120264ms] May 6 22:12:59.264: INFO: Created: latency-svc-874m7 May 6 22:12:59.307: INFO: Got endpoints: latency-svc-v48rq [751.288663ms] May 6 22:12:59.312: INFO: Created: latency-svc-zzt2k May 6 22:12:59.356: INFO: Got endpoints: latency-svc-zhjxc [750.409872ms] May 6 22:12:59.362: INFO: Created: latency-svc-5wpj7 May 6 22:12:59.407: INFO: Got endpoints: latency-svc-r2btp [750.876444ms] May 6 22:12:59.413: INFO: Created: latency-svc-dkpz4 May 6 22:12:59.456: INFO: Got endpoints: latency-svc-pzqr4 [750.441695ms] May 6 22:12:59.461: INFO: Created: latency-svc-7xsbb May 6 22:12:59.506: INFO: Got endpoints: latency-svc-mgvzl [750.01021ms] May 6 22:12:59.512: INFO: Created: latency-svc-nzgk4 May 6 22:12:59.556: INFO: Got endpoints: latency-svc-jx7kv [750.810588ms] May 6 22:12:59.564: INFO: Created: latency-svc-h5sr2 May 6 22:12:59.607: INFO: Got endpoints: latency-svc-fnxfb [748.457101ms] May 6 22:12:59.613: INFO: Created: latency-svc-jb6v6 May 6 22:12:59.656: INFO: Got endpoints: latency-svc-rvrwb [749.340674ms] May 6 22:12:59.662: INFO: Created: latency-svc-d8kdz May 6 22:12:59.706: INFO: Got endpoints: latency-svc-wr4wh [749.827574ms] May 6 22:12:59.711: INFO: Created: latency-svc-xvb2l May 6 22:12:59.755: INFO: Got endpoints: latency-svc-mwzhp [749.200467ms] May 6 22:12:59.760: INFO: Created: latency-svc-lsd8n May 6 22:12:59.806: INFO: Got endpoints: latency-svc-m2pp9 [749.653263ms] May 6 22:12:59.812: INFO: Created: latency-svc-ckhmz May 6 22:12:59.855: INFO: Got endpoints: latency-svc-vkfdc [748.446028ms] May 6 22:12:59.861: INFO: Created: latency-svc-2zs6c May 6 22:12:59.957: INFO: Got endpoints: latency-svc-6bssp [801.51903ms] May 6 22:12:59.964: INFO: Created: latency-svc-mf6wb May 6 22:13:00.006: INFO: Got endpoints: latency-svc-v6lpv [800.920571ms] May 6 22:13:00.012: INFO: Created: latency-svc-dxc8h May 6 22:13:00.056: INFO: Got endpoints: latency-svc-874m7 [799.081581ms] May 6 22:13:00.061: INFO: Created: latency-svc-9ddnc May 6 22:13:00.106: INFO: Got endpoints: latency-svc-zzt2k [799.27154ms] May 6 22:13:00.112: INFO: Created: latency-svc-j64mc May 6 22:13:00.156: INFO: Got endpoints: latency-svc-5wpj7 [799.425895ms] May 6 22:13:00.162: INFO: Created: latency-svc-6qxpp May 6 22:13:00.205: INFO: Got endpoints: latency-svc-dkpz4 [797.687932ms] May 6 22:13:00.210: INFO: Created: latency-svc-qhtxq May 6 22:13:00.257: INFO: Got endpoints: latency-svc-7xsbb [800.22336ms] May 6 22:13:00.270: INFO: Created: latency-svc-54nsk May 6 22:13:00.307: INFO: Got endpoints: latency-svc-nzgk4 [800.822746ms] May 6 22:13:00.313: INFO: Created: latency-svc-nsfgt May 6 22:13:00.359: INFO: Got endpoints: latency-svc-h5sr2 [803.227674ms] May 6 22:13:00.371: INFO: Created: latency-svc-98bnw May 6 22:13:00.406: INFO: Got endpoints: latency-svc-jb6v6 [798.532385ms] May 6 22:13:00.411: INFO: Created: latency-svc-f2gz9 May 6 22:13:00.457: 
INFO: Got endpoints: latency-svc-d8kdz [800.53347ms] May 6 22:13:00.463: INFO: Created: latency-svc-zdbp8 May 6 22:13:00.506: INFO: Got endpoints: latency-svc-xvb2l [800.100143ms] May 6 22:13:00.511: INFO: Created: latency-svc-8ppb8 May 6 22:13:00.556: INFO: Got endpoints: latency-svc-lsd8n [800.239219ms] May 6 22:13:00.560: INFO: Created: latency-svc-x92lg May 6 22:13:00.606: INFO: Got endpoints: latency-svc-ckhmz [799.855955ms] May 6 22:13:00.612: INFO: Created: latency-svc-j8wpv May 6 22:13:00.656: INFO: Got endpoints: latency-svc-2zs6c [800.559905ms] May 6 22:13:00.663: INFO: Created: latency-svc-k7ld2 May 6 22:13:00.706: INFO: Got endpoints: latency-svc-mf6wb [749.235376ms] May 6 22:13:00.711: INFO: Created: latency-svc-jlflp May 6 22:13:00.755: INFO: Got endpoints: latency-svc-dxc8h [748.707197ms] May 6 22:13:00.762: INFO: Created: latency-svc-cqs6t May 6 22:13:00.806: INFO: Got endpoints: latency-svc-9ddnc [750.252153ms] May 6 22:13:00.811: INFO: Created: latency-svc-psfzd May 6 22:13:00.855: INFO: Got endpoints: latency-svc-j64mc [748.354207ms] May 6 22:13:00.860: INFO: Created: latency-svc-6drpb May 6 22:13:00.906: INFO: Got endpoints: latency-svc-6qxpp [750.214369ms] May 6 22:13:00.913: INFO: Created: latency-svc-4z67x May 6 22:13:00.955: INFO: Got endpoints: latency-svc-qhtxq [750.501089ms] May 6 22:13:00.960: INFO: Created: latency-svc-4qnmx May 6 22:13:01.055: INFO: Got endpoints: latency-svc-54nsk [798.717383ms] May 6 22:13:01.060: INFO: Created: latency-svc-zkgvj May 6 22:13:01.106: INFO: Got endpoints: latency-svc-nsfgt [799.168305ms] May 6 22:13:01.113: INFO: Created: latency-svc-q9gf4 May 6 22:13:01.157: INFO: Got endpoints: latency-svc-98bnw [797.083213ms] May 6 22:13:01.164: INFO: Created: latency-svc-pbgjv May 6 22:13:01.207: INFO: Got endpoints: latency-svc-f2gz9 [800.665149ms] May 6 22:13:01.212: INFO: Created: latency-svc-q5tfr May 6 22:13:01.256: INFO: Got endpoints: latency-svc-zdbp8 [799.066885ms] May 6 22:13:01.261: INFO: Created: latency-svc-fhdmh May 6 22:13:01.306: INFO: Got endpoints: latency-svc-8ppb8 [799.792577ms] May 6 22:13:01.311: INFO: Created: latency-svc-vc5vr May 6 22:13:01.358: INFO: Got endpoints: latency-svc-x92lg [802.643819ms] May 6 22:13:01.363: INFO: Created: latency-svc-sp7tt May 6 22:13:01.406: INFO: Got endpoints: latency-svc-j8wpv [799.621935ms] May 6 22:13:01.414: INFO: Created: latency-svc-c45x5 May 6 22:13:01.456: INFO: Got endpoints: latency-svc-k7ld2 [800.499453ms] May 6 22:13:01.462: INFO: Created: latency-svc-7dnpl May 6 22:13:01.506: INFO: Got endpoints: latency-svc-jlflp [800.256228ms] May 6 22:13:01.511: INFO: Created: latency-svc-4zw4n May 6 22:13:01.556: INFO: Got endpoints: latency-svc-cqs6t [801.174013ms] May 6 22:13:01.563: INFO: Created: latency-svc-w2cmq May 6 22:13:01.606: INFO: Got endpoints: latency-svc-psfzd [800.074211ms] May 6 22:13:01.612: INFO: Created: latency-svc-vp9tx May 6 22:13:01.657: INFO: Got endpoints: latency-svc-6drpb [802.048413ms] May 6 22:13:01.663: INFO: Created: latency-svc-2l58t May 6 22:13:01.706: INFO: Got endpoints: latency-svc-4z67x [800.03268ms] May 6 22:13:01.713: INFO: Created: latency-svc-6gwxb May 6 22:13:01.757: INFO: Got endpoints: latency-svc-4qnmx [801.475249ms] May 6 22:13:01.762: INFO: Created: latency-svc-pnwcb May 6 22:13:01.860: INFO: Got endpoints: latency-svc-zkgvj [804.233591ms] May 6 22:13:01.874: INFO: Created: latency-svc-5zt6k May 6 22:13:01.905: INFO: Got endpoints: latency-svc-q9gf4 [799.171483ms] May 6 22:13:01.912: INFO: Created: latency-svc-wlvq7 May 6 22:13:01.956: 
INFO: Got endpoints: latency-svc-pbgjv [798.912773ms] May 6 22:13:01.961: INFO: Created: latency-svc-ckk5d May 6 22:13:02.006: INFO: Got endpoints: latency-svc-q5tfr [799.024701ms] May 6 22:13:02.011: INFO: Created: latency-svc-rhg52 May 6 22:13:02.057: INFO: Got endpoints: latency-svc-fhdmh [801.430945ms] May 6 22:13:02.063: INFO: Created: latency-svc-9dh6q May 6 22:13:02.106: INFO: Got endpoints: latency-svc-vc5vr [800.272777ms] May 6 22:13:02.113: INFO: Created: latency-svc-4t7sr May 6 22:13:02.156: INFO: Got endpoints: latency-svc-sp7tt [797.887588ms] May 6 22:13:02.161: INFO: Created: latency-svc-7n9s4 May 6 22:13:02.206: INFO: Got endpoints: latency-svc-c45x5 [800.641598ms] May 6 22:13:02.212: INFO: Created: latency-svc-trbsp May 6 22:13:02.257: INFO: Got endpoints: latency-svc-7dnpl [800.097692ms] May 6 22:13:02.263: INFO: Created: latency-svc-kc7j7 May 6 22:13:02.306: INFO: Got endpoints: latency-svc-4zw4n [799.348319ms] May 6 22:13:02.311: INFO: Created: latency-svc-nfkkc May 6 22:13:02.356: INFO: Got endpoints: latency-svc-w2cmq [799.860584ms] May 6 22:13:02.363: INFO: Created: latency-svc-2g6jj May 6 22:13:02.406: INFO: Got endpoints: latency-svc-vp9tx [800.132785ms] May 6 22:13:02.412: INFO: Created: latency-svc-8fgxs May 6 22:13:02.460: INFO: Got endpoints: latency-svc-2l58t [803.266539ms] May 6 22:13:02.466: INFO: Created: latency-svc-9jnss May 6 22:13:02.507: INFO: Got endpoints: latency-svc-6gwxb [800.822906ms] May 6 22:13:02.514: INFO: Created: latency-svc-zpbgw May 6 22:13:02.557: INFO: Got endpoints: latency-svc-pnwcb [799.770961ms] May 6 22:13:02.567: INFO: Created: latency-svc-8stfk May 6 22:13:02.605: INFO: Got endpoints: latency-svc-5zt6k [745.442653ms] May 6 22:13:02.612: INFO: Created: latency-svc-md85d May 6 22:13:02.656: INFO: Got endpoints: latency-svc-wlvq7 [750.737919ms] May 6 22:13:02.662: INFO: Created: latency-svc-8h8df May 6 22:13:02.706: INFO: Got endpoints: latency-svc-ckk5d [750.027391ms] May 6 22:13:02.711: INFO: Created: latency-svc-gnw2p May 6 22:13:02.756: INFO: Got endpoints: latency-svc-rhg52 [749.747353ms] May 6 22:13:02.760: INFO: Created: latency-svc-bms9w May 6 22:13:02.806: INFO: Got endpoints: latency-svc-9dh6q [748.700926ms] May 6 22:13:02.812: INFO: Created: latency-svc-hmxbf May 6 22:13:02.856: INFO: Got endpoints: latency-svc-4t7sr [749.580482ms] May 6 22:13:02.862: INFO: Created: latency-svc-htx4d May 6 22:13:02.906: INFO: Got endpoints: latency-svc-7n9s4 [749.517748ms] May 6 22:13:02.911: INFO: Created: latency-svc-jk4j6 May 6 22:13:02.957: INFO: Got endpoints: latency-svc-trbsp [750.722976ms] May 6 22:13:02.963: INFO: Created: latency-svc-99hs8 May 6 22:13:03.005: INFO: Got endpoints: latency-svc-kc7j7 [748.725719ms] May 6 22:13:03.014: INFO: Created: latency-svc-bk4hz May 6 22:13:03.057: INFO: Got endpoints: latency-svc-nfkkc [750.911082ms] May 6 22:13:03.063: INFO: Created: latency-svc-7gpds May 6 22:13:03.106: INFO: Got endpoints: latency-svc-2g6jj [750.05823ms] May 6 22:13:03.113: INFO: Created: latency-svc-bms9j May 6 22:13:03.156: INFO: Got endpoints: latency-svc-8fgxs [749.88355ms] May 6 22:13:03.162: INFO: Created: latency-svc-j6k4x May 6 22:13:03.206: INFO: Got endpoints: latency-svc-9jnss [746.118329ms] May 6 22:13:03.211: INFO: Created: latency-svc-vsdgm May 6 22:13:03.256: INFO: Got endpoints: latency-svc-zpbgw [749.15542ms] May 6 22:13:03.263: INFO: Created: latency-svc-wpqwx May 6 22:13:03.306: INFO: Got endpoints: latency-svc-8stfk [749.184035ms] May 6 22:13:03.312: INFO: Created: latency-svc-bb4s5 May 6 22:13:03.356: 
INFO: Got endpoints: latency-svc-md85d [751.141997ms] May 6 22:13:03.362: INFO: Created: latency-svc-wwk56 May 6 22:13:03.407: INFO: Got endpoints: latency-svc-8h8df [750.38685ms] May 6 22:13:03.413: INFO: Created: latency-svc-42pj8 May 6 22:13:03.456: INFO: Got endpoints: latency-svc-gnw2p [750.395501ms] May 6 22:13:03.461: INFO: Created: latency-svc-7k6qz May 6 22:13:03.505: INFO: Got endpoints: latency-svc-bms9w [749.900856ms] May 6 22:13:03.511: INFO: Created: latency-svc-8ph2k May 6 22:13:03.555: INFO: Got endpoints: latency-svc-hmxbf [749.531155ms] May 6 22:13:03.562: INFO: Created: latency-svc-d69pw May 6 22:13:03.606: INFO: Got endpoints: latency-svc-htx4d [750.191281ms] May 6 22:13:03.613: INFO: Created: latency-svc-76hmw May 6 22:13:03.657: INFO: Got endpoints: latency-svc-jk4j6 [751.29384ms] May 6 22:13:03.662: INFO: Created: latency-svc-49g98 May 6 22:13:03.707: INFO: Got endpoints: latency-svc-99hs8 [749.517858ms] May 6 22:13:03.712: INFO: Created: latency-svc-dfbjp May 6 22:13:03.756: INFO: Got endpoints: latency-svc-bk4hz [750.279438ms] May 6 22:13:03.761: INFO: Created: latency-svc-9dbhs May 6 22:13:03.806: INFO: Got endpoints: latency-svc-7gpds [749.284203ms] May 6 22:13:03.856: INFO: Got endpoints: latency-svc-bms9j [749.932565ms] May 6 22:13:03.907: INFO: Got endpoints: latency-svc-j6k4x [750.395693ms] May 6 22:13:03.956: INFO: Got endpoints: latency-svc-vsdgm [750.002892ms] May 6 22:13:04.007: INFO: Got endpoints: latency-svc-wpqwx [750.649948ms] May 6 22:13:04.056: INFO: Got endpoints: latency-svc-bb4s5 [749.835878ms] May 6 22:13:04.106: INFO: Got endpoints: latency-svc-wwk56 [749.447952ms] May 6 22:13:04.155: INFO: Got endpoints: latency-svc-42pj8 [748.50863ms] May 6 22:13:04.205: INFO: Got endpoints: latency-svc-7k6qz [749.224921ms] May 6 22:13:04.259: INFO: Got endpoints: latency-svc-8ph2k [753.244784ms] May 6 22:13:04.305: INFO: Got endpoints: latency-svc-d69pw [749.460793ms] May 6 22:13:04.355: INFO: Got endpoints: latency-svc-76hmw [749.598496ms] May 6 22:13:04.405: INFO: Got endpoints: latency-svc-49g98 [748.382141ms] May 6 22:13:04.457: INFO: Got endpoints: latency-svc-dfbjp [750.092732ms] May 6 22:13:04.505: INFO: Got endpoints: latency-svc-9dbhs [749.517262ms] May 6 22:13:04.505: INFO: Latencies: [9.100108ms 11.772494ms 12.609211ms 15.850328ms 18.88015ms 21.568276ms 26.632326ms 30.484997ms 32.758473ms 35.351272ms 38.591437ms 41.018295ms 43.367554ms 44.000523ms 44.292738ms 44.414782ms 44.510393ms 44.77218ms 44.900091ms 45.230721ms 45.353215ms 45.76556ms 46.275879ms 46.542554ms 46.747291ms 46.945693ms 47.23939ms 47.260728ms 47.344664ms 47.949259ms 49.965324ms 90.537636ms 138.15631ms 185.80617ms 232.892328ms 279.820619ms 325.326732ms 373.11994ms 421.244365ms 467.976483ms 515.28188ms 562.454943ms 605.793094ms 653.85738ms 703.250147ms 744.516498ms 745.442653ms 746.118329ms 747.59098ms 747.937838ms 748.354207ms 748.36234ms 748.382141ms 748.446028ms 748.457101ms 748.50863ms 748.645564ms 748.700926ms 748.703311ms 748.707197ms 748.711141ms 748.725719ms 748.931041ms 749.083068ms 749.130886ms 749.144788ms 749.15542ms 749.184035ms 749.200467ms 749.224921ms 749.235376ms 749.237297ms 749.284203ms 749.3062ms 749.317043ms 749.327964ms 749.340674ms 749.378542ms 749.379047ms 749.401476ms 749.447952ms 749.460793ms 749.517262ms 749.517748ms 749.517858ms 749.531155ms 749.551622ms 749.571749ms 749.580482ms 749.598496ms 749.627781ms 749.648687ms 749.653263ms 749.693438ms 749.69693ms 749.747353ms 749.811322ms 749.823737ms 749.827574ms 749.835878ms 749.88355ms 749.897138ms 
749.900856ms 749.914475ms 749.932565ms 749.953401ms 750.002892ms 750.01021ms 750.027391ms 750.034641ms 750.05823ms 750.092732ms 750.120264ms 750.122722ms 750.125547ms 750.157228ms 750.191281ms 750.214369ms 750.252153ms 750.279438ms 750.320716ms 750.38685ms 750.391233ms 750.395501ms 750.395693ms 750.396966ms 750.409872ms 750.441695ms 750.501089ms 750.551573ms 750.556447ms 750.562239ms 750.567628ms 750.627073ms 750.647296ms 750.649948ms 750.666787ms 750.722976ms 750.737919ms 750.803936ms 750.810588ms 750.81725ms 750.876444ms 750.911082ms 750.926838ms 751.127075ms 751.141997ms 751.262886ms 751.288663ms 751.29384ms 751.580738ms 752.664142ms 753.244784ms 753.844782ms 754.954469ms 797.083213ms 797.687932ms 797.887588ms 798.532385ms 798.717383ms 798.912773ms 799.024701ms 799.066885ms 799.081581ms 799.168305ms 799.171483ms 799.27154ms 799.348319ms 799.425895ms 799.621935ms 799.770961ms 799.792577ms 799.855955ms 799.860584ms 800.03268ms 800.074211ms 800.097692ms 800.100143ms 800.132785ms 800.22336ms 800.239219ms 800.256228ms 800.272777ms 800.499453ms 800.53347ms 800.559905ms 800.641598ms 800.665149ms 800.822746ms 800.822906ms 800.920571ms 801.174013ms 801.430945ms 801.475249ms 801.51903ms 802.048413ms 802.643819ms 803.227674ms 803.266539ms 804.233591ms] May 6 22:13:04.506: INFO: 50 %ile: 749.88355ms May 6 22:13:04.506: INFO: 90 %ile: 800.239219ms May 6 22:13:04.506: INFO: 99 %ile: 803.266539ms May 6 22:13:04.506: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:04.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4497" for this suite. • [SLOW TEST:11.920 seconds] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":20,"skipped":397,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:04.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-735df856-7bb0-4628-bcf9-33d398f28be8 STEP: Creating a pod to test consume secrets May 6 22:13:04.567: INFO: Waiting up to 5m0s for pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98" in namespace "secrets-8117" to be "Succeeded or Failed" May 6 22:13:04.569: INFO: Pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025157ms May 6 22:13:06.573: INFO: Pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.005965429s May 6 22:13:08.576: INFO: Pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009688124s May 6 22:13:10.579: INFO: Pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.011964018s STEP: Saw pod success May 6 22:13:10.579: INFO: Pod "pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98" satisfied condition "Succeeded or Failed" May 6 22:13:10.581: INFO: Trying to get logs from node node2 pod pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98 container secret-env-test: STEP: delete the pod May 6 22:13:10.643: INFO: Waiting for pod pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98 to disappear May 6 22:13:10.649: INFO: Pod pod-secrets-8f821b0d-8ed0-4b2b-b57d-1577e9e99b98 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:10.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8117" for this suite. • [SLOW TEST:6.124 seconds] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":403,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:04.185: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:13:04.517: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:13:06.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:13:09.539: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:09.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2823-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:17.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4591" for this suite. STEP: Destroying namespace "webhook-4591-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.478 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":44,"skipped":681,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:04.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 22:13:04.817: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 22:13:06.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471984, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:13:09.840: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:09.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9951" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.729 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":20,"skipped":232,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:10.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:10.697: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:18.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9678" for this suite. 
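The custom resource defaulting test above boils down to a structural schema with a default value: the API server applies the default both when admitting write requests and when serving objects read back from storage, which is the "for requests and from storage" in the test name. A sketch of such a CRD using the apiextensions v1 client (the group, kind, and color field are invented for illustration):

```go
package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	crd := &apiextensionsv1.CustomResourceDefinition{
		// CRD names must be <plural>.<group>.
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.demo.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {
								Type: "object",
								Properties: map[string]apiextensionsv1.JSONSchemaProps{
									// Structural-schema defaulting: the API server
									// fills in "color" when it is omitted.
									"color": {
										Type:    "string",
										Default: &apiextensionsv1.JSON{Raw: []byte(`"red"`)},
									},
								},
							},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```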
• [SLOW TEST:7.644 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":22,"skipped":411,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":28,"skipped":457,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:28.093: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-5826 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-5826 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5826 May 6 22:12:28.127: INFO: Found 0 stateful pods, waiting for 1 May 6 22:12:38.129: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 6 22:12:38.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:12:38.444: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:12:38.444: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:12:38.444: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:12:38.446: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 22:12:48.450: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 22:12:48.450: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:12:48.463: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:12:48.463: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:39 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:12:48.463: INFO: May 6 22:12:48.463: INFO: StatefulSet ss has not reached scale 3, at 1 May 6 22:12:49.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996503585s May 6 22:12:50.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992377655s May 6 22:12:51.474: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988945998s May 6 22:12:52.513: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985382267s May 6 22:12:53.517: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.946054108s May 6 22:12:54.521: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.942279457s May 6 22:12:55.524: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.93838041s May 6 22:12:56.528: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.934417017s May 6 22:12:57.538: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.894175ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5826 May 6 22:12:58.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:12:58.803: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:12:58.803: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:12:58.803: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:12:58.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:12:59.050: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 6 22:12:59.050: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:12:59.050: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:12:59.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:12:59.304: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" May 6 22:12:59.304: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:12:59.304: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:12:59.308: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:12:59.308: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:12:59.308: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 6 22:12:59.310: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:12:59.566: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:12:59.566: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:12:59.566: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:12:59.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:12:59.811: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:12:59.811: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:12:59.812: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:12:59.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-5826 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:13:00.286: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:13:00.286: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:13:00.286: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:13:00.286: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:13:00.289: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 22:13:10.295: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:10.295: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:10.295: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:10.304: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:10.304: INFO: ss-0 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:10.304: INFO: ss-1 node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:10.304: INFO: ss-2 node1 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:10.304: INFO: May 6 22:13:10.304: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:11.309: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:11.309: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:11.309: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:11.309: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:11.309: INFO: May 6 22:13:11.309: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:12.314: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:12.314: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:12.314: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:12.314: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:12.314: INFO: May 6 22:13:12.314: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:13.318: INFO: POD NODE PHASE 
GRACE CONDITIONS May 6 22:13:13.319: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:13.319: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:13.319: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:13.319: INFO: May 6 22:13:13.319: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:14.325: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:14.325: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:14.325: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:14.325: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:14.325: INFO: May 6 22:13:14.325: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:15.329: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:15.329: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:15.329: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:15.329: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:15.329: INFO: May 6 22:13:15.329: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:16.335: INFO: POD NODE PHASE GRACE CONDITIONS May 6 22:13:16.335: INFO: ss-0 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:28 +0000 UTC }] May 6 22:13:16.335: INFO: ss-1 node2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:16.335: INFO: ss-2 node1 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:13:00 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:12:48 +0000 UTC }] May 6 22:13:16.335: INFO: May 6 22:13:16.335: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 22:13:17.338: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.965359271s May 6 22:13:18.344: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.961712783s May 6 22:13:19.349: INFO: Verifying statefulset ss doesn't scale past 0 for another 954.714035ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5826 May 6 22:13:20.352: INFO: Scaling statefulset ss to 0 May 6 22:13:20.360: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet
functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:13:20.362: INFO: Deleting all statefulset in ns statefulset-5826 May 6 22:13:20.364: INFO: Scaling statefulset ss to 0 May 6 22:13:20.372: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:13:20.373: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:20.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5826" for this suite. • [SLOW TEST:52.296 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":29,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:18.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:13:18.388: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa" in namespace "downward-api-6796" to be "Succeeded or Failed" May 6 22:13:18.389: INFO: Pod "downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa": Phase="Pending", Reason="", readiness=false. Elapsed: 1.885716ms May 6 22:13:20.394: INFO: Pod "downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006427759s May 6 22:13:22.398: INFO: Pod "downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010744265s STEP: Saw pod success May 6 22:13:22.398: INFO: Pod "downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa" satisfied condition "Succeeded or Failed" May 6 22:13:22.400: INFO: Trying to get logs from node node1 pod downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa container client-container: STEP: delete the pod May 6 22:13:22.415: INFO: Waiting for pod downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa to disappear May 6 22:13:22.417: INFO: Pod downwardapi-volume-c0bb14b4-dfc4-408b-97ed-27dec2a5f2aa no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:22.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6796" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":425,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:01.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-downwardapi-nl74 STEP: Creating a pod to test atomic-volume-subpath May 6 22:13:01.039: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-nl74" in namespace "subpath-2315" to be "Succeeded or Failed" May 6 22:13:01.041: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.388331ms May 6 22:13:03.045: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005859051s May 6 22:13:05.048: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 4.009332913s May 6 22:13:07.052: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 6.012896238s May 6 22:13:09.056: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 8.01709979s May 6 22:13:11.060: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 10.021485678s May 6 22:13:13.064: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 12.024609292s May 6 22:13:15.067: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 14.028379999s May 6 22:13:17.073: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 16.034230745s May 6 22:13:19.077: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.038433176s May 6 22:13:21.083: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 20.04434162s May 6 22:13:23.086: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Running", Reason="", readiness=true. Elapsed: 22.047467484s May 6 22:13:25.090: INFO: Pod "pod-subpath-test-downwardapi-nl74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.051425938s STEP: Saw pod success May 6 22:13:25.090: INFO: Pod "pod-subpath-test-downwardapi-nl74" satisfied condition "Succeeded or Failed" May 6 22:13:25.093: INFO: Trying to get logs from node node2 pod pod-subpath-test-downwardapi-nl74 container test-container-subpath-downwardapi-nl74: STEP: delete the pod May 6 22:13:25.109: INFO: Waiting for pod pod-subpath-test-downwardapi-nl74 to disappear May 6 22:13:25.111: INFO: Pod pod-subpath-test-downwardapi-nl74 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-nl74 May 6 22:13:25.111: INFO: Deleting pod "pod-subpath-test-downwardapi-nl74" in namespace "subpath-2315" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:25.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2315" for this suite. • [SLOW TEST:24.123 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":17,"skipped":313,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:10:33.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod May 6 22:12:34.148: INFO: Successfully updated pod "var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d" STEP: waiting for pod running STEP: deleting the pod gracefully May 6 22:12:40.155: INFO: Deleting pod "var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d" in namespace "var-expansion-7638" May 6 22:12:40.160: INFO: Wait up to 5m0s for pod "var-expansion-9b38f33d-4b21-49a6-923a-b1f5ceb02b4d" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:28.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7638" for this suite. 
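
The expansion feature exercised by the var-expansion test above is the volumeMount field subPathExpr, which substitutes the container's environment variables into the subpath when the container starts; if the expansion yields an invalid path the container cannot start, and the suite apparently drives the variable from mutable pod metadata (env vars themselves cannot change on a live pod), which is how "updating the pod" repairs the failing path. The suite's exact manifest is not printed in this log, so the following is only a minimal sketch of subPathExpr itself, with hypothetical pod name, image, and paths:

# Minimal subPathExpr sketch (pod name, image, and paths are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /logs/hello.txt && sleep 3600"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      # expanded at container start, so data lands in a per-pod subdirectory
      subPathExpr: $(POD_NAME)
  volumes:
  - name: workdir
    emptyDir: {}
EOF
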
• [SLOW TEST:174.578 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":16,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:22.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 6 22:13:22.511: INFO: The status of Pod labelsupdate82bddf5a-4233-4ef2-a5f7-7f72f02f4c81 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:24.514: INFO: The status of Pod labelsupdate82bddf5a-4233-4ef2-a5f7-7f72f02f4c81 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:26.514: INFO: The status of Pod labelsupdate82bddf5a-4233-4ef2-a5f7-7f72f02f4c81 is Running (Ready = true) May 6 22:13:27.032: INFO: Successfully updated pod "labelsupdate82bddf5a-4233-4ef2-a5f7-7f72f02f4c81" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:29.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3021" for this suite. 
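
The label propagation checked by the Downward API test above relies on a downwardAPI volume: selected pod fields are projected into files, and the kubelet rewrites those files when the fields change on the live pod (on its periodic sync, so eventually rather than instantly). A minimal sketch of that wiring, with hypothetical pod name, label, and mount path:

# Downward API volume sketch (names are illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo
  labels:
    tier: demo
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
EOF
kubectl label pod labels-demo tier=updated --overwrite
kubectl exec labels-demo -- cat /etc/podinfo/labels   # shows tier="updated" once resynced
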
• [SLOW TEST:6.580 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":451,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:20.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:20.455: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 22:13:20.460: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 22:13:25.463: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 22:13:25.463: INFO: Creating deployment "test-rolling-update-deployment" May 6 22:13:25.467: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 6 22:13:25.471: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 22:13:27.478: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 6 22:13:27.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472005, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472005, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472005, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472005, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-585b757574\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:13:29.484: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:13:29.491: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6757 
01f1d671-1be6-42b6-8ca2-df8b68dd0366 43828 1 2022-05-06 22:13:25 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-05-06 22:13:25 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000cd3e18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-06 22:13:25 +0000 UTC,LastTransitionTime:2022-05-06 22:13:25 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-585b757574" has successfully progressed.,LastUpdateTime:2022-05-06 22:13:27 +0000 UTC,LastTransitionTime:2022-05-06 22:13:25 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 22:13:29.494: INFO: New ReplicaSet "test-rolling-update-deployment-585b757574" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-585b757574 deployment-6757
7b767e52-006c-4daa-b5a4-e6c6d0948b11 43817 1 2022-05-06 22:13:25 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 01f1d671-1be6-42b6-8ca2-df8b68dd0366 0xc00315e677 0xc00315e678}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01f1d671-1be6-42b6-8ca2-df8b68dd0366\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 585b757574,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00315e788 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 22:13:29.494: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 6 22:13:29.494: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6757 8e84e010-b992-4e83-bc71-b8b8f1331dcc 43827 2 2022-05-06 22:13:20 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 01f1d671-1be6-42b6-8ca2-df8b68dd0366 0xc00315e557 0xc00315e558}] [] [{e2e.test Update apps/v1 2022-05-06 22:13:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:13:27 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"01f1d671-1be6-42b6-8ca2-df8b68dd0366\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00315e608 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 22:13:29.497: INFO: Pod "test-rolling-update-deployment-585b757574-gspvn" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-585b757574-gspvn test-rolling-update-deployment-585b757574- deployment-6757 96593574-593e-4842-923f-41bc720b3450 43816 0 2022-05-06 22:13:25 +0000 UTC map[name:sample-pod pod-template-hash:585b757574] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.85" ], "mac": "da:18:da:21:b2:a3", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.85" ], "mac": "da:18:da:21:b2:a3", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rolling-update-deployment-585b757574 7b767e52-006c-4daa-b5a4-e6c6d0948b11 0xc00315efbf 0xc00315efd0}] [] [{kube-controller-manager Update v1 2022-05-06 22:13:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b767e52-006c-4daa-b5a4-e6c6d0948b11\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:13:26 +0000 UTC 
FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:13:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.85\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ng7gs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ng7gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:13:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:13:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.85,StartTime:2022-05-06 22:13:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:13:27 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://6e0552abe394b44d068fa1ad405852969e29cb8ea66a60af8d625a27bff4179f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:29.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6757" for this suite. 
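
The rollout above follows the default RollingUpdate strategy that the dump makes explicit (maxSurge and maxUnavailable both 25%): the controller scales up the new ReplicaSet while scaling the adopted one to zero, and keeps the old ReplicaSet around for rollback per revisionHistoryLimit. A repro sketch using the names from this run (the suite destroys the namespace afterwards, so substitute live names to actually run it):

# Trigger and observe a rolling update; the namespace and object names below
# are taken from this run and no longer exist after teardown.
kubectl -n deployment-6757 set image deployment/test-rolling-update-deployment agnhost=k8s.gcr.io/e2e-test-images/agnhost:2.32
kubectl -n deployment-6757 rollout status deployment/test-rolling-update-deployment
kubectl -n deployment-6757 get replicasets   # old RS scaled to 0 replicas, new RS at 1
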
• [SLOW TEST:9.071 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":30,"skipped":481,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:18.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:13:18.428: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:13:20.437: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471998, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471998, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787471998, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:13:23.449: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:23.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:31.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9835" for this suite. STEP: Destroying namespace "webhook-9835-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.552 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":21,"skipped":241,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:29.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-00934ec0-c0dc-4d85-bf87-19ee826fdcb0 STEP: Creating a pod to test consume configMaps May 6 22:13:29.570: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8" in namespace "projected-692" to be "Succeeded or Failed" May 6 22:13:29.572: INFO: Pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2397ms May 6 22:13:31.575: INFO: Pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004712993s May 6 22:13:33.578: INFO: Pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008261668s May 6 22:13:35.582: INFO: Pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.01152508s STEP: Saw pod success May 6 22:13:35.582: INFO: Pod "pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8" satisfied condition "Succeeded or Failed" May 6 22:13:35.583: INFO: Trying to get logs from node node2 pod pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8 container agnhost-container: STEP: delete the pod May 6 22:13:35.598: INFO: Waiting for pod pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8 to disappear May 6 22:13:35.601: INFO: Pod pod-projected-configmaps-491a8427-c562-48bb-84f8-51704c25c9b8 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:35.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-692" for this suite. • [SLOW TEST:6.074 seconds] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":497,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:25.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:36.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2426" for this suite. • [SLOW TEST:11.067 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":-1,"completed":18,"skipped":333,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:35.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-fd012a6b-93c8-414f-ac80-fea2d3159b04 STEP: Creating a pod to test consume secrets May 6 22:13:35.725: INFO: Waiting up to 5m0s for pod "pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d" in namespace "secrets-561" to be "Succeeded or Failed" May 6 22:13:35.728: INFO: Pod "pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.786303ms May 6 22:13:37.732: INFO: Pod "pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006909064s May 6 22:13:39.736: INFO: Pod "pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011448978s STEP: Saw pod success May 6 22:13:39.736: INFO: Pod "pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d" satisfied condition "Succeeded or Failed" May 6 22:13:39.739: INFO: Trying to get logs from node node1 pod pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d container secret-volume-test: STEP: delete the pod May 6 22:13:39.752: INFO: Waiting for pod pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d to disappear May 6 22:13:39.754: INFO: Pod pod-secrets-46fbd7b0-c36a-4e85-9b9b-9ff07ed4508d no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:39.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-561" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":536,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:31.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 6 22:13:31.654: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:39.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5926" for this suite. • [SLOW TEST:8.189 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should invoke init containers on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":22,"skipped":259,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:36.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 22:13:36.302: INFO: Waiting up to 5m0s for pod "pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3" in namespace "emptydir-494" to be "Succeeded or Failed" May 6 22:13:36.307: INFO: Pod "pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.401984ms May 6 22:13:38.312: INFO: Pod "pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010386101s May 6 22:13:40.318: INFO: Pod "pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015964314s STEP: Saw pod success May 6 22:13:40.318: INFO: Pod "pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3" satisfied condition "Succeeded or Failed" May 6 22:13:40.321: INFO: Trying to get logs from node node1 pod pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3 container test-container: STEP: delete the pod May 6 22:13:40.337: INFO: Waiting for pod pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3 to disappear May 6 22:13:40.339: INFO: Pod pod-fdc950df-e0b2-4a6c-86b2-9e9e00001cc3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:40.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-494" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:09:34.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad in namespace container-probe-2549 May 6 22:09:40.759: INFO: Started pod liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad in namespace container-probe-2549 STEP: checking the pod's current state and verifying that restartCount is present May 6 22:09:40.761: INFO: Initial restart count of pod liveness-399021ca-6525-4b3a-a9e8-75adf8cfcfad is 0 STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:41.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2549" for this suite. 
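The probe spec above passes because the container keeps port 8080 open, so the restart count observed at 22:09:40 is still 0 when the pod is deleted four minutes later. A minimal sketch of such a pod, with an illustrative name and an assumed agnhost netexec server on 8080:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo      # illustrative; the suite names it liveness-<uuid>
spec:
  containers:
  - name: agnhost
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: [ "netexec", "--http-port=8080" ]   # keeps 8080 listening, so the TCP probe always connects
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
      failureThreshold: 3

Had the probe failed failureThreshold times in a row, the kubelet would have restarted the container and the test would have flagged the changed restartCount.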
• [SLOW TEST:246.563 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":199,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:13:28.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
May 6 22:13:28.244: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend

May 6 22:13:28.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:28.681: INFO: stderr: ""
May 6 22:13:28.681: INFO: stdout: "service/agnhost-replica created\n"
May 6 22:13:28.681: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend

May 6 22:13:28.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:28.966: INFO: stderr: ""
May 6 22:13:28.966: INFO: stdout: "service/agnhost-primary created\n"
May 6 22:13:28.966: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 6 22:13:28.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:29.305: INFO: stderr: ""
May 6 22:13:29.305: INFO: stdout: "service/frontend created\n"
May 6 22:13:29.305: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 6 22:13:29.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:29.668: INFO: stderr: ""
May 6 22:13:29.668: INFO: stdout: "deployment.apps/frontend created\n"
May 6 22:13:29.668: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 22:13:29.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:29.991: INFO: stderr: ""
May 6 22:13:29.991: INFO: stdout: "deployment.apps/agnhost-primary created\n"
May 6 22:13:29.992: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 22:13:29.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 create -f -'
May 6 22:13:30.332: INFO: stderr: ""
May 6 22:13:30.332: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
May 6 22:13:30.332: INFO: Waiting for all frontend pods to be Running.
May 6 22:13:35.382: INFO: Waiting for frontend to serve content.
May 6 22:13:36.390: INFO: Trying to add a new entry to the guestbook.
May 6 22:13:36.396: INFO: Verifying that added entry can be retrieved.
May 6 22:13:36.403: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
May 6 22:13:41.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -'
May 6 22:13:41.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:41.571: INFO: stdout: "service \"agnhost-replica\" force deleted\n" STEP: using delete to clean up resources May 6 22:13:41.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -' May 6 22:13:41.715: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:41.715: INFO: stdout: "service \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 6 22:13:41.715: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -' May 6 22:13:41.856: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:41.856: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 22:13:41.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -' May 6 22:13:41.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:41.981: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 22:13:41.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -' May 6 22:13:42.119: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:42.119: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" STEP: using delete to clean up resources May 6 22:13:42.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3879 delete --grace-period=0 --force -f -' May 6 22:13:42.262: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:13:42.262: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:42.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3879" for this suite. 
• [SLOW TEST:14.049 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336 should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":17,"skipped":323,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:29.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting up the test STEP: Creating hostNetwork=false pod May 6 22:13:29.111: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:31.115: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:33.116: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:35.116: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:37.115: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:39.115: INFO: The status of Pod test-pod is Running (Ready = true) STEP: Creating hostNetwork=true pod May 6 22:13:39.133: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:41.138: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:43.136: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:45.136: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:47.136: INFO: The status of Pod test-host-network-pod is Running (Ready = true) STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 6 22:13:47.139: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.139: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.335: INFO: Exec stderr: "" May 6 22:13:47.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.335: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.415: INFO: Exec stderr: "" May 6 22:13:47.415: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.415: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.502: INFO: Exec stderr: "" May 6 22:13:47.502: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.502: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.579: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 6 22:13:47.579: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.579: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.663: INFO: Exec stderr: "" May 6 22:13:47.663: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.663: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.769: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 6 22:13:47.769: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.769: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.849: INFO: Exec stderr: "" May 6 22:13:47.850: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.850: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:47.940: INFO: Exec stderr: "" May 6 22:13:47.940: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:47.940: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:48.030: INFO: Exec stderr: "" May 6 22:13:48.030: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1900 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:13:48.030: INFO: >>> kubeConfig: /root/.kube/config May 6 22:13:48.115: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:48.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1900" for this suite. 
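The three verifications above hinge on one rule: the kubelet manages a container's /etc/hosts unless the pod uses the host network or the container mounts something over /etc/hosts itself. A minimal sketch of the mount-based opt-out, with illustrative names (the suite's pods carry several busybox containers and exec `cat` through the API, as the records above show):

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-demo
spec:
  containers:
  - name: managed
    image: busybox
    command: [ "sleep", "3600" ]       # /etc/hosts here is kubelet-managed
  - name: unmanaged
    image: busybox
    command: [ "sleep", "3600" ]
    volumeMounts:
    - name: hosts-override             # an explicit mount at /etc/hosts opts this container out
      mountPath: /etc/hosts
  volumes:
  - name: hosts-override
    hostPath:
      path: /etc/hosts
      type: File

Setting spec.hostNetwork: true makes the kubelet leave /etc/hosts alone for every container in the pod, which is the second pod the test creates.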
• [SLOW TEST:19.049 seconds] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":463,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:41.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:41.327: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:43.330: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:45.331: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:47.331: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:49.331: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:51.332: INFO: The status of Pod busybox-host-aliases32e2592c-5c43-436f-ab1c-30d8e08c1e3e is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:51.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2458" for this suite. 
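hostAliases entries are what the kubelet appends to the managed /etc/hosts, which is what the busybox-host-aliases pod above verifies. A minimal sketch with illustrative addresses and names:

apiVersion: v1
kind: Pod
metadata:
  name: host-aliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"                 # illustrative address
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: [ "cat", "/etc/hosts" ]   # the extra entries appear at the end of the file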
• [SLOW TEST:10.055 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox Pod with hostAliases /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:137 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":204,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:51.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:51.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6382" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":18,"skipped":359,"failed":0} SS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:40.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 6 22:13:48.957: INFO: Successfully updated pod "adopt-release-5ttzd" STEP: Checking that the Job readopts the Pod May 6 22:13:48.957: INFO: Waiting up to 15m0s for pod "adopt-release-5ttzd" in namespace "job-4977" to be "adopted" May 6 22:13:48.959: INFO: Pod "adopt-release-5ttzd": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.292058ms May 6 22:13:50.963: INFO: Pod "adopt-release-5ttzd": Phase="Running", Reason="", readiness=true. Elapsed: 2.006206616s May 6 22:13:50.963: INFO: Pod "adopt-release-5ttzd" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 6 22:13:51.472: INFO: Successfully updated pod "adopt-release-5ttzd" STEP: Checking that the Job releases the Pod May 6 22:13:51.472: INFO: Waiting up to 15m0s for pod "adopt-release-5ttzd" in namespace "job-4977" to be "released" May 6 22:13:51.475: INFO: Pod "adopt-release-5ttzd": Phase="Running", Reason="", readiness=true. Elapsed: 2.274452ms May 6 22:13:53.481: INFO: Pod "adopt-release-5ttzd": Phase="Running", Reason="", readiness=true. Elapsed: 2.008661073s May 6 22:13:53.481: INFO: Pod "adopt-release-5ttzd" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:53.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4977" for this suite. • [SLOW TEST:13.079 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":20,"skipped":379,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:53.505: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename ingressclass STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:149 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: getting /apis STEP: getting /apis/networking.k8s.io STEP: getting /apis/networking.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching May 6 22:13:53.546: INFO: starting watch STEP: patching STEP: updating May 6 22:13:53.552: INFO: waiting for watch events with expected annotations May 6 22:13:53.552: INFO: saw patched and updated annotations STEP: deleting STEP: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:53.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "ingressclass-8039" for this suite. 
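The create/get/list/watch/patch/update/delete/deleteCollection sequence above runs against ordinary IngressClass objects; a minimal one looks like this (the controller string is illustrative, and it is opaque to the API server, only a matching controller acts on it):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class                  # illustrative name
spec:
  controller: example.com/ingress-controller

IngressClass is cluster-scoped, so the ingressclass-8039 namespace above only hosts the test's client plumbing, not the objects themselves.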
• ------------------------------ {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":21,"skipped":385,"failed":0} SSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:39.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running May 6 22:13:41.838: INFO: running pods: 0 < 3 May 6 22:13:43.841: INFO: running pods: 0 < 3 May 6 22:13:45.842: INFO: running pods: 0 < 3 May 6 22:13:47.845: INFO: running pods: 0 < 3 May 6 22:13:49.843: INFO: running pods: 1 < 3 May 6 22:13:51.842: INFO: running pods: 1 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:53.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-5164" for this suite. • [SLOW TEST:14.079 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":33,"skipped":542,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:53.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:53.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3728" for this suite. 
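The immutability check that follows boils down to the top-level immutable field: once it is true, the API server rejects updates to data and binaryData, and the only way to change the contents is to delete and recreate the object. A minimal sketch with illustrative contents:

apiVersion: v1
kind: ConfigMap
metadata:
  name: immutable-demo
data:
  key: value
immutable: true    # after creation, data updates are rejected and the flag cannot be flipped back to false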
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":34,"skipped":549,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:39.836: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service multi-endpoint-test in namespace services-1027 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1027 to expose endpoints map[] May 6 22:13:39.869: INFO: successfully validated that service multi-endpoint-test in namespace services-1027 exposes endpoints map[] STEP: Creating pod pod1 in namespace services-1027 May 6 22:13:39.883: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:41.887: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:43.887: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:45.886: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:47.888: INFO: The status of Pod pod1 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1027 to expose endpoints map[pod1:[100]] May 6 22:13:47.900: INFO: successfully validated that service multi-endpoint-test in namespace services-1027 exposes endpoints map[pod1:[100]] STEP: Creating pod pod2 in namespace services-1027 May 6 22:13:47.914: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:49.917: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:51.918: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:53.918: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:55.916: INFO: The status of Pod pod2 is Running (Ready = true) STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1027 to expose endpoints map[pod1:[100] pod2:[101]] May 6 22:13:55.932: INFO: successfully validated that service multi-endpoint-test in namespace services-1027 exposes endpoints map[pod1:[100] pod2:[101]] STEP: Deleting pod pod1 in namespace services-1027 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1027 to expose endpoints map[pod2:[101]] May 6 22:13:55.946: INFO: successfully validated that service multi-endpoint-test in namespace services-1027 exposes endpoints map[pod2:[101]] STEP: Deleting pod pod2 in namespace services-1027 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1027 to expose endpoints map[] May 6 22:13:55.958: INFO: successfully validated that service multi-endpoint-test in namespace services-1027 exposes endpoints map[] [AfterEach] 
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:13:55.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1027" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:16.139 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:53.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:53.991: INFO: The status of Pod busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:55.993: INFO: The status of Pod busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:57.995: INFO: The status of Pod busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:59.995: INFO: The status of Pod busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:00.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3055" for this suite. 
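The kubelet-test-3055 pod above exists only to show that whatever a container writes to stdout is retrievable through the logs API. A minimal sketch, with an illustrative name and message:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logging-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "echo hello from busybox" ]   # stdout is captured by the container runtime

Running kubectl logs busybox-logging-demo afterwards returns the echoed line, which is the assertion the spec makes once the pod is Running.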
• [SLOW TEST:6.059 seconds] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when scheduling a busybox command in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:41 should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":564,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:42.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:13:42.311: INFO: Creating ReplicaSet my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758 May 6 22:13:42.317: INFO: Pod name my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758: Found 0 pods out of 1 May 6 22:13:47.322: INFO: Pod name my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758: Found 1 pods out of 1 May 6 22:13:47.322: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758" is running May 6 22:13:55.333: INFO: Pod "my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758-4brtp" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:13:42 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:13:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:13:42 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:13:42 +0000 UTC Reason: Message:}]) May 6 22:13:55.334: INFO: Trying to dial the pod May 6 22:14:00.347: INFO: Controller my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758: Got expected result from replica 1 [my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758-4brtp]: "my-hostname-basic-1e896030-4fab-4fcb-a39c-c27935ee0758-4brtp", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:00.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2848" for this suite. 
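The ReplicaSet under test serves each replica's hostname over HTTP, and the suite dials every replica until it answers with its own pod name, which is the "Got expected result from replica 1" record above. A rough equivalent, with the name shortened (the suite embeds a UUID):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32
        args: [ "serve-hostname" ]       # answers HTTP GET / with the pod's hostname
        ports:
        - containerPort: 9376            # serve-hostname's default port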
• [SLOW TEST:18.067 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":18,"skipped":331,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:51.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in container's args May 6 22:13:51.693: INFO: Waiting up to 5m0s for pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a" in namespace "var-expansion-3015" to be "Succeeded or Failed" May 6 22:13:51.696: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.947408ms May 6 22:13:53.700: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007590461s May 6 22:13:55.704: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011377048s May 6 22:13:57.708: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.014994875s May 6 22:13:59.711: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.018550322s May 6 22:14:01.715: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.022200415s STEP: Saw pod success May 6 22:14:01.715: INFO: Pod "var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a" satisfied condition "Succeeded or Failed" May 6 22:14:01.717: INFO: Trying to get logs from node node2 pod var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a container dapi-container: STEP: delete the pod May 6 22:14:01.728: INFO: Waiting for pod var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a to disappear May 6 22:14:01.730: INFO: Pod var-expansion-b922bd9e-84f2-40e0-8b32-6b357820168a no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:01.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3015" for this suite. 
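The dapi-container above exercises kubelet-side $(VAR) expansion: references to environment variables declared on the same container are substituted into command and args before the process starts. A minimal sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: [ "sh", "-c" ]
    args: [ "echo TEST_VAR is $(TEST_VAR)" ]   # $(TEST_VAR) is expanded by the kubelet, not by the shell
    env:
    - name: TEST_VAR
      value: "test-value"

An unresolvable $(NAME) reference is passed through literally, which is why the syntax is safe to mix with ordinary shell text.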
• [SLOW TEST:10.079 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":361,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:48.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes May 6 22:13:48.171: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:50.174: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:52.175: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:54.176: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:56.175: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Pending, waiting for it to be Running (with Ready = true) May 6 22:13:58.174: INFO: The status of Pod pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 22:13:58.689: INFO: Successfully updated pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe" May 6 22:13:58.689: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe" in namespace "pods-7444" to be "terminated due to deadline exceeded" May 6 22:13:58.692: INFO: Pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe": Phase="Running", Reason="", readiness=true. Elapsed: 2.302394ms May 6 22:14:00.696: INFO: Pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe": Phase="Running", Reason="", readiness=true. Elapsed: 2.006475243s May 6 22:14:02.699: INFO: Pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 4.009263009s May 6 22:14:02.699: INFO: Pod "pod-update-activedeadlineseconds-84f679ed-5f0a-4f7c-a9dd-19e6f29c6dfe" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:02.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7444" for this suite. • [SLOW TEST:14.573 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":467,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":23,"skipped":271,"failed":0} [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:35 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:55.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sysctl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64 [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with the kernel.shm_rmid_forced sysctl STEP: Watching for error events or started pod STEP: Waiting for pod completion STEP: Checking that the pod succeeded STEP: Getting logs from the pod STEP: Checking that the sysctl is actually updated [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:04.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sysctl-6136" for this suite. 
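The sysctl spec above sets a kernel parameter through the pod security context and then reads it back inside the container. A minimal sketch using the same kernel.shm_rmid_forced parameter seen in the log (name and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"                       # sysctl values are strings in the API
  containers:
  - name: busybox
    image: busybox
    command: [ "sh", "-c", "sysctl kernel.shm_rmid_forced" ]   # should print kernel.shm_rmid_forced = 1

Sysctls outside the kubelet's allowed set cause the pod to be rejected instead, which is the companion "should reject invalid sysctls" case earlier in this log.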
• [SLOW TEST:8.059 seconds] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":24,"skipped":271,"failed":0} S ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:00.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 22:14:00.079: INFO: Waiting up to 5m0s for pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a" in namespace "emptydir-7476" to be "Succeeded or Failed" May 6 22:14:00.081: INFO: Pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072037ms May 6 22:14:02.085: INFO: Pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005477493s May 6 22:14:04.089: INFO: Pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009427114s May 6 22:14:06.093: INFO: Pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.013178467s STEP: Saw pod success May 6 22:14:06.093: INFO: Pod "pod-d1353884-0691-4096-bf31-9ecd7b7ad30a" satisfied condition "Succeeded or Failed" May 6 22:14:06.095: INFO: Trying to get logs from node node2 pod pod-d1353884-0691-4096-bf31-9ecd7b7ad30a container test-container: STEP: delete the pod May 6 22:14:06.126: INFO: Waiting for pod pod-d1353884-0691-4096-bf31-9ecd7b7ad30a to disappear May 6 22:14:06.128: INFO: Pod pod-d1353884-0691-4096-bf31-9ecd7b7ad30a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:06.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7476" for this suite. 
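Each emptyDir permutation in these specs is the same pod shape with a different medium, file mode, and user. A rough equivalent of the (non-root,0666,default) case above, assuming busybox in place of the suite's own test image and that the emptyDir is writable by the non-root UID:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                    # the non-root part of the test matrix
  containers:
  - name: test-container
    image: busybox
    command: [ "sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f" ]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium; medium: Memory gives the tmpfs variants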
• [SLOW TEST:6.092 seconds] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":578,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:00.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating the pod May 6 22:14:00.422: INFO: The status of Pod annotationupdate011fe303-265a-4867-a7da-27d07356d0c7 is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:02.425: INFO: The status of Pod annotationupdate011fe303-265a-4867-a7da-27d07356d0c7 is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:04.425: INFO: The status of Pod annotationupdate011fe303-265a-4867-a7da-27d07356d0c7 is Running (Ready = true) May 6 22:14:04.955: INFO: Successfully updated pod "annotationupdate011fe303-265a-4867-a7da-27d07356d0c7" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:06.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-350" for this suite. 
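The annotationupdate pod above works because downwardAPI volume files are refreshed by the kubelet when pod metadata changes, so the annotation update at 22:14:04 eventually shows up in the mounted file. A minimal sketch, with the annotation and paths illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo
  annotations:
    build: "one"                       # patch this later and the mounted file follows
spec:
  containers:
  - name: client-container
    image: busybox
    command: [ "sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Only the volume form of the downward API tracks updates; downward API environment variables are fixed at container start.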
• [SLOW TEST:6.591 seconds] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:02.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:02.791: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:08.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-6997" for this suite. • [SLOW TEST:5.559 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":27,"skipped":500,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:13:53.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:09.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1088" for this suite. • [SLOW TEST:16.115 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":22,"skipped":388,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:07.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC May 6 22:14:07.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9182 create -f -' May 6 22:14:07.460: INFO: stderr: "" May 6 22:14:07.460: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. May 6 22:14:08.463: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:08.463: INFO: Found 0 / 1 May 6 22:14:09.464: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:09.464: INFO: Found 0 / 1 May 6 22:14:10.466: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:10.466: INFO: Found 0 / 1 May 6 22:14:11.464: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:11.464: INFO: Found 1 / 1 May 6 22:14:11.464: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 6 22:14:11.466: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:11.467: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 6 22:14:11.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9182 patch pod agnhost-primary-vgg2n -p {"metadata":{"annotations":{"x":"y"}}}' May 6 22:14:11.633: INFO: stderr: "" May 6 22:14:11.633: INFO: stdout: "pod/agnhost-primary-vgg2n patched\n" STEP: checking annotations May 6 22:14:11.635: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:14:11.635: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:11.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9182" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":20,"skipped":364,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:08.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-e8ef892e-b997-4434-93f2-687e6541cb83 STEP: Creating a pod to test consume configMaps May 6 22:14:08.431: INFO: Waiting up to 5m0s for pod "pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78" in namespace "configmap-6000" to be "Succeeded or Failed" May 6 22:14:08.433: INFO: Pod "pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072188ms May 6 22:14:10.436: INFO: Pod "pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.004991272s May 6 22:14:12.440: INFO: Pod "pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009088164s STEP: Saw pod success May 6 22:14:12.441: INFO: Pod "pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78" satisfied condition "Succeeded or Failed" May 6 22:14:12.443: INFO: Trying to get logs from node node1 pod pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78 container agnhost-container: STEP: delete the pod May 6 22:14:12.459: INFO: Waiting for pod pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78 to disappear May 6 22:14:12.461: INFO: Pod pod-configmaps-7359f35f-38a7-4bd5-bf00-5230c9772b78 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:12.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6000" for this suite. 
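The "with mappings" wording in the ConfigMap test above refers to KeyToPath items: instead of materialising every key under its own name, the volume projects a chosen key to a chosen relative path. A minimal sketch, with the names, values, image, and namespace all being illustrative:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume-map"},
        Data:       map[string]string{"data-1": "value-1"},
    }
    if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
                        // The mapping: key data-1 appears at path/to/data-2, not at data-1.
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "agnhost-container",
                Image:        "busybox", // assumption
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}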
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":543,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:11.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:14:11.703: INFO: Waiting up to 5m0s for pod "downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae" in namespace "downward-api-8665" to be "Succeeded or Failed" May 6 22:14:11.706: INFO: Pod "downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35099ms May 6 22:14:13.709: INFO: Pod "downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005683914s May 6 22:14:15.713: INFO: Pod "downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009588357s STEP: Saw pod success May 6 22:14:15.713: INFO: Pod "downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae" satisfied condition "Succeeded or Failed" May 6 22:14:15.716: INFO: Trying to get logs from node node1 pod downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae container client-container: STEP: delete the pod May 6 22:14:15.835: INFO: Waiting for pod downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae to disappear May 6 22:14:15.836: INFO: Pod downwardapi-volume-160219cb-44b7-4ee2-a583-49449fcac4ae no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:15.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8665" for this suite. 
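The Downward API test that just passed reads the container's own memory request back through a resourceFieldRef rather than a fieldRef; the kubelet writes the value in bytes by default. A sketch under assumed names, with an illustrative 32Mi request:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request",
                            // resourceFieldRef exposes a container's own resources;
                            // with the default divisor the request is printed in bytes.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox", // assumption
                Command: []string{"cat", "/etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}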
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":381,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:09.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:09.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be processed STEP: listing a collection of PDBs across all namespaces STEP: listing a collection of PDBs in namespace disruption-3189 STEP: deleting a collection of PDBs STEP: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:15.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-2-3330" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:15.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-3189" for this suite. 
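The DisruptionController steps above map onto three straightforward client-go calls: listing PodDisruptionBudgets with an empty namespace argument to span all namespaces, listing within one namespace, and DeleteCollection with a label selector. A sketch, reusing the namespace from the run but with illustrative labels and thresholds:

package main

import (
    "context"
    "fmt"

    policyv1 "k8s.io/api/policy/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    ns := "disruption-3189" // namespace from the run above

    // Create a PDB carrying a label so the collection can be selected later.
    minAvailable := intstr.FromInt(1)
    pdb := &policyv1.PodDisruptionBudget{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "pdb-", Labels: map[string]string{"suite": "demo"}},
        Spec: policyv1.PodDisruptionBudgetSpec{
            MinAvailable: &minAvailable,
            Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"app": "demo"}},
        },
    }
    if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
        panic(err)
    }

    // An empty namespace argument lists across all namespaces.
    all, err := cs.PolicyV1().PodDisruptionBudgets("").List(ctx, metav1.ListOptions{LabelSelector: "suite=demo"})
    if err != nil {
        panic(err)
    }
    fmt.Printf("found %d PDBs across all namespaces\n", len(all.Items))

    // Delete the whole collection by selector, as the test does.
    if err := cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(
        ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: "suite=demo"}); err != nil {
        panic(err)
    }
}

Note that the typed PolicyV1() client requires the policy/v1 API, which went GA in the 1.21 cluster under test here.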
• [SLOW TEST:6.103 seconds] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75 should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":23,"skipped":435,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:15.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should run through a ConfigMap lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ConfigMap STEP: fetching the ConfigMap STEP: patching the ConfigMap STEP: listing all ConfigMaps in all namespaces with a label selector STEP: deleting the ConfigMap by collection with a label selector STEP: listing all ConfigMaps in test namespace [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:15.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1298" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":24,"skipped":448,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:12.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-map-3bb05bed-9faa-4b43-9c64-c3549e58613f STEP: Creating a pod to test consume configMaps May 6 22:14:12.537: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788" in namespace "projected-1966" to be "Succeeded or Failed" May 6 22:14:12.539: INFO: Pod "pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2139ms May 6 22:14:14.542: INFO: Pod "pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005073858s May 6 22:14:16.546: INFO: Pod "pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.008906299s STEP: Saw pod success May 6 22:14:16.546: INFO: Pod "pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788" satisfied condition "Succeeded or Failed" May 6 22:14:16.548: INFO: Trying to get logs from node node1 pod pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788 container agnhost-container: STEP: delete the pod May 6 22:14:16.563: INFO: Waiting for pod pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788 to disappear May 6 22:14:16.565: INFO: Pod pod-projected-configmaps-e0364839-758c-4784-84eb-a97278cf5788 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:16.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1966" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":561,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:16.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-370.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-370.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-370.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-370.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-370.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:14:22.656: INFO: DNS probes using dns-370/dns-test-b414ed0f-2bf2-4644-b938-1a9c206af92d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:22.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-370" for this suite. • [SLOW TEST:6.079 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":571,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:15.862: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:14:16.673: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created May 6 22:14:18.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:14:20.685: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, 
Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472056, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:14:23.690: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:23.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5058" for this suite. STEP: Destroying namespace "webhook-5058-markers" for this suite. 
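"Fail closed" in the webhook test above means the ValidatingWebhookConfiguration sets failurePolicy: Fail, so the apiserver rejects matching requests whenever the webhook backend is unreachable, which is exactly why the configmap create is unconditionally rejected. A sketch of such a registration; the configuration name, webhook name, path, and CA bundle are placeholders, while the service reference mirrors the e2e-test-webhook service deployed above:

package main

import (
    "context"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    fail := admissionregistrationv1.Fail
    sideEffects := admissionregistrationv1.SideEffectClassNone
    path := "/configmaps"          // placeholder path
    caBundle := []byte("<CA PEM>") // placeholder; must be the CA that signed the serving cert

    webhookCfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-demo"},
        Webhooks: []admissionregistrationv1.ValidatingWebhook{{
            Name:                    "fail-closed.demo.example.com",
            AdmissionReviewVersions: []string{"v1"},
            SideEffects:             &sideEffects,
            // Fail (rather than Ignore) is what makes the webhook "fail closed":
            // if the backend cannot be reached, the request is rejected outright.
            FailurePolicy: &fail,
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "webhook-5058",
                    Name:      "e2e-test-webhook",
                    Path:      &path,
                },
                CABundle: caBundle,
            },
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{""},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"configmaps"},
                },
            }},
        }},
    }
    if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(
        context.Background(), webhookCfg, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}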
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.886 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":22,"skipped":389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:01.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD May 6 22:14:01.767: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:25.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1783" for this suite. 
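The three STEP lines above (mark a version not served, check the unserved version gets removed, check the other version is not changed) correspond to flipping served: false on one CRD version and watching the published OpenAPI document. A sketch against a hypothetical two-version CRD named foos.example.com, using the apiextensions clientset:

package main

import (
    "context"

    apiextclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := apiextclientset.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()

    // Hypothetical multi-version CRD assumed to exist already.
    crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, "foos.example.com", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    for i := range crd.Spec.Versions {
        if crd.Spec.Versions[i].Name == "v2" {
            // Stop serving v2; v1 stays served, so only v2's definition should
            // disappear from the aggregated /openapi/v2 document.
            crd.Spec.Versions[i].Served = false
        }
    }
    if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Update(ctx, crd, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}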
• [SLOW TEST:23.793 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":20,"skipped":363,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:23.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:14:23.820: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255" in namespace "projected-6528" to be "Succeeded or Failed" May 6 22:14:23.822: INFO: Pod "downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.241724ms May 6 22:14:25.826: INFO: Pod "downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006449895s May 6 22:14:27.830: INFO: Pod "downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010014237s STEP: Saw pod success May 6 22:14:27.830: INFO: Pod "downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255" satisfied condition "Succeeded or Failed" May 6 22:14:27.832: INFO: Trying to get logs from node node2 pod downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255 container client-container: STEP: delete the pod May 6 22:14:27.845: INFO: Waiting for pod downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255 to disappear May 6 22:14:27.849: INFO: Pod downwardapi-volume-5fa0ef64-389e-423b-bf07-e76c3c634255 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:27.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6528" for this suite. 
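"Podname only" in the projected downwardAPI test above means a projected volume whose single source is a downward API item for metadata.name. A sketch with an assumed image and namespace; projected volumes can also mix secret, configMap, and serviceAccountToken sources under the same mount, as the big pod dump further below illustrates:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            DownwardAPI: &corev1.DownwardAPIProjection{
                                Items: []corev1.DownwardAPIVolumeFile{{
                                    Path:     "podname",
                                    FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "metadata.name"},
                                }},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "client-container",
                Image:        "busybox", // assumption
                Command:      []string{"cat", "/etc/podinfo/podname"},
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}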
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":412,"failed":0} SSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:27.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:27.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9219" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":24,"skipped":415,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:25.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:25.600: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 6 22:14:25.615: INFO: The status of Pod pod-logs-websocket-d9b5cd1b-5e75-4d97-a6d6-1825b17b6ab6 is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:27.619: INFO: The status of Pod pod-logs-websocket-d9b5cd1b-5e75-4d97-a6d6-1825b17b6ab6 is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:29.618: INFO: The status of Pod pod-logs-websocket-d9b5cd1b-5e75-4d97-a6d6-1825b17b6ab6 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:29.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7541" for this suite. 
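The pods test above fetches the /log subresource over a websocket-negotiated connection; the ordinary client-go path streams the same endpoint over plain HTTP and is the simpler equivalent, so that is what this sketch shows (the pod name is a placeholder; the namespace is the one from the run):

package main

import (
    "context"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // GetLogs builds a request against the pod's /log subresource; Stream
    // returns the log body as the kubelet produces it.
    req := cs.CoreV1().Pods("pods-7541").GetLogs("pod-logs-websocket-demo", &corev1.PodLogOptions{})
    rc, err := req.Stream(context.Background())
    if err != nil {
        panic(err)
    }
    defer rc.Close()
    if _, err := io.Copy(os.Stdout, rc); err != nil {
        panic(err)
    }
}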
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":383,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:27.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 6 22:14:27.982: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-7471 82ae4899-02fa-4533-b90d-bee768eaef18 45696 0 2022-05-06 22:14:27 +0000 UTC map[] map[kubernetes.io/psp:collectd] [] [] [{e2e.test Update v1 2022-05-06 22:14:27 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nnpbf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnpbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalat
ion:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 22:14:27.985: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:29.988: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:31.989: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 6 22:14:31.990: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-7471 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:14:31.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Verifying customized DNS server is configured on pod... May 6 22:14:32.126: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-7471 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:14:32.126: INFO: >>> kubeConfig: /root/.kube/config May 6 22:14:32.231: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:32.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7471" for this suite. 
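The pod spec dumped above is verbose, but the part under test is small: dnsPolicy: None plus a custom dnsConfig. Reduced to a client-go sketch that keeps the image, args, nameserver, and search path exactly as they appear in the dump:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers"},
        Spec: corev1.PodSpec{
            // DNSNone ignores the cluster resolver entirely; the container's
            // resolv.conf is built solely from the DNSConfig below.
            DNSPolicy: corev1.DNSNone,
            DNSConfig: &corev1.PodDNSConfig{
                Nameservers: []string{"1.1.1.1"},
                Searches:    []string{"resolv.conf.local"},
            },
            Containers: []corev1.Container{{
                Name:  "agnhost-container",
                Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
                Args:  []string{"pause"},
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("dns-7471").Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
    // The test then execs `agnhost dns-server-list` and `agnhost dns-suffix`
    // inside the container to confirm both settings took effect.
}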
• ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":25,"skipped":431,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:22.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:22.720: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 22:14:27.726: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Scaling up "test-rs" replicaset May 6 22:14:27.754: INFO: Updating replica set "test-rs" STEP: patching the ReplicaSet May 6 22:14:27.762: INFO: observed ReplicaSet test-rs in namespace replicaset-6198 with ReadyReplicas 1, AvailableReplicas 1 May 6 22:14:27.775: INFO: observed ReplicaSet test-rs in namespace replicaset-6198 with ReadyReplicas 1, AvailableReplicas 1 May 6 22:14:27.784: INFO: observed ReplicaSet test-rs in namespace replicaset-6198 with ReadyReplicas 1, AvailableReplicas 1 May 6 22:14:27.787: INFO: observed ReplicaSet test-rs in namespace replicaset-6198 with ReadyReplicas 1, AvailableReplicas 1 May 6 22:14:32.724: INFO: observed ReplicaSet test-rs in namespace replicaset-6198 with ReadyReplicas 2, AvailableReplicas 2 May 6 22:14:33.792: INFO: observed Replicaset test-rs in namespace replicaset-6198 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:33.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6198" for this suite. 
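"Replace and Patch" in the ReplicaSet test above is literal: the ReplicaSet is first scaled by updating spec.replicas in place, then again via a strategic merge patch, while a watch reports ReadyReplicas climbing from 1 to 3. A sketch against the names from the run; the specific replica counts are illustrative:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    rsClient := cs.AppsV1().ReplicaSets("replicaset-6198")

    // The "Replace" half: read, mutate spec.replicas, write back.
    rs, err := rsClient.Get(ctx, "test-rs", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    two := int32(2)
    rs.Spec.Replicas = &two
    if _, err := rsClient.Update(ctx, rs, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }

    // The "Patch" half: a strategic merge patch bumping replicas to 3.
    patch := []byte(`{"spec":{"replicas":3}}`)
    if _, err := rsClient.Patch(ctx, "test-rs", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
        panic(err)
    }
    // A watch on the ReplicaSet now reports ReadyReplicas 2, then 3, matching
    // the "observed ReplicaSet test-rs" lines above.
}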
• [SLOW TEST:11.109 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":31,"skipped":581,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:32.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:38.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9389" for this suite. • [SLOW TEST:6.052 seconds] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":444,"failed":0} SSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:13.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6974 STEP: creating service affinity-nodeport in namespace services-6974 STEP: creating replication controller affinity-nodeport in namespace services-6974 I0506 22:12:13.152485 37 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-6974, replica count: 3 I0506 22:12:16.203330 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:12:19.204568 37 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 
terminating, 0 unknown, 0 runningButNotReady May 6 22:12:19.217: INFO: Creating new exec pod May 6 22:12:26.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80' May 6 22:12:26.487: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" May 6 22:12:26.487: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:12:26.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.54.169 80' May 6 22:12:27.090: INFO: stderr: "+ nc -v -t -w 2 10.233.54.169 80\nConnection to 10.233.54.169 80 port [tcp/http] succeeded!\n+ echo hostName\n" May 6 22:12:27.090: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:12:27.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:27.431: INFO: rc: 1 May 6 22:12:27.431: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:28.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:28.681: INFO: rc: 1 May 6 22:12:28.681: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:29.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:29.654: INFO: rc: 1 May 6 22:12:29.654: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:12:30.434 through 22:12:45.712: INFO: The same probe, '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746', was rerun roughly once per second (16 attempts); every attempt exited with rc: 1 and stderr 'nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused', each followed by Retrying... 
May 6 22:12:46.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:46.673: INFO: rc: 1 May 6 22:12:46.674: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:47.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:47.673: INFO: rc: 1 May 6 22:12:47.673: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:48.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:48.689: INFO: rc: 1 May 6 22:12:48.689: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:49.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:49.744: INFO: rc: 1 May 6 22:12:49.744: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:12:50.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:50.711: INFO: rc: 1 May 6 22:12:50.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:51.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:51.836: INFO: rc: 1 May 6 22:12:51.836: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:52.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:52.692: INFO: rc: 1 May 6 22:12:52.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:53.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:53.675: INFO: rc: 1 May 6 22:12:53.675: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:12:54.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:54.678: INFO: rc: 1 May 6 22:12:54.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:55.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:55.665: INFO: rc: 1 May 6 22:12:55.665: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:56.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:56.947: INFO: rc: 1 May 6 22:12:56.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:57.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:57.719: INFO: rc: 1 May 6 22:12:57.719: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:12:58.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:58.762: INFO: rc: 1 May 6 22:12:58.762: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:12:59.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:12:59.701: INFO: rc: 1 May 6 22:12:59.701: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:00.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:00.690: INFO: rc: 1 May 6 22:13:00.690: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:01.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:01.703: INFO: rc: 1 May 6 22:13:01.703: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:02.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:02.799: INFO: rc: 1 May 6 22:13:02.799: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:03.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:03.693: INFO: rc: 1 May 6 22:13:03.693: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:04.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:04.667: INFO: rc: 1 May 6 22:13:04.667: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:05.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:06.013: INFO: rc: 1 May 6 22:13:06.013: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:06.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:06.747: INFO: rc: 1 May 6 22:13:06.747: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:07.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:07.763: INFO: rc: 1 May 6 22:13:07.763: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:08.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:09.404: INFO: rc: 1 May 6 22:13:09.404: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:09.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:09.814: INFO: rc: 1 May 6 22:13:09.814: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:10.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:10.874: INFO: rc: 1 May 6 22:13:10.874: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:11.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:11.717: INFO: rc: 1 May 6 22:13:11.717: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:12.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:12.743: INFO: rc: 1 May 6 22:13:12.744: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:13.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:13.700: INFO: rc: 1 May 6 22:13:13.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:14.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:14.683: INFO: rc: 1 May 6 22:13:14.683: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:15.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:15.687: INFO: rc: 1 May 6 22:13:15.687: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:16.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:16.680: INFO: rc: 1 May 6 22:13:16.680: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:17.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:17.677: INFO: rc: 1 May 6 22:13:17.677: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:18.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:18.682: INFO: rc: 1 May 6 22:13:18.682: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:19.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:19.919: INFO: rc: 1 May 6 22:13:19.919: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:20.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:20.696: INFO: rc: 1 May 6 22:13:20.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:21.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:21.677: INFO: rc: 1 May 6 22:13:21.677: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:22.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:22.709: INFO: rc: 1 May 6 22:13:22.709: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:23.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:23.854: INFO: rc: 1 May 6 22:13:23.854: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:24.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:24.699: INFO: rc: 1 May 6 22:13:24.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:25.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:25.685: INFO: rc: 1 May 6 22:13:25.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:26.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:26.963: INFO: rc: 1 May 6 22:13:26.963: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:27.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:27.689: INFO: rc: 1 May 6 22:13:27.689: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:28.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:28.686: INFO: rc: 1 May 6 22:13:28.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:29.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:29.676: INFO: rc: 1 May 6 22:13:29.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:30.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:31.023: INFO: rc: 1 May 6 22:13:31.024: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:31.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:31.737: INFO: rc: 1 May 6 22:13:31.737: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:32.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:32.971: INFO: rc: 1 May 6 22:13:32.971: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:33.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:33.685: INFO: rc: 1 May 6 22:13:33.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:34.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:34.698: INFO: rc: 1 May 6 22:13:34.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:35.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:35.741: INFO: rc: 1 May 6 22:13:35.741: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:36.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:36.684: INFO: rc: 1 May 6 22:13:36.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:37.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:37.669: INFO: rc: 1 May 6 22:13:37.669: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:38.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:39.135: INFO: rc: 1 May 6 22:13:39.135: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:39.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:39.708: INFO: rc: 1 May 6 22:13:39.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:40.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:40.725: INFO: rc: 1 May 6 22:13:40.725: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:41.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:41.804: INFO: rc: 1 May 6 22:13:41.804: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:42.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:43.003: INFO: rc: 1 May 6 22:13:43.003: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:43.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:43.949: INFO: rc: 1 May 6 22:13:43.949: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:44.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:44.954: INFO: rc: 1 May 6 22:13:44.954: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:45.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:46.211: INFO: rc: 1 May 6 22:13:46.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:46.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:46.934: INFO: rc: 1 May 6 22:13:46.934: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:47.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:47.678: INFO: rc: 1 May 6 22:13:47.678: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:48.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:48.710: INFO: rc: 1 May 6 22:13:48.710: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:49.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:49.804: INFO: rc: 1 May 6 22:13:49.804: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:50.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:50.698: INFO: rc: 1 May 6 22:13:50.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:51.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:51.695: INFO: rc: 1 May 6 22:13:51.696: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:52.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:53.205: INFO: rc: 1 May 6 22:13:53.205: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:13:53.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746' May 6 22:13:53.831: INFO: rc: 1 May 6 22:13:53.831: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32746 + echo hostName nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:13:54.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746'
May 6 22:13:54.681: INFO: rc: 1
May 6 22:13:54.681: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32746
+ echo hostName
nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[... intermediate attempts from 22:13:55 through 22:14:27.432 elided; each repeated the same command roughly once per second and failed with the identical "Connection refused" output above ...]
May 6 22:14:27.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746'
May 6 22:14:27.968: INFO: rc: 1
May 6 22:14:27.968: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6974 exec execpod-affinitycsw8q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32746:
Command stdout:

stderr:
+ nc -v -t -w 2 10.10.190.207 32746
+ echo hostName
nc: connect to 10.10.190.207 port 32746 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 6 22:14:27.968: FAIL: Unexpected error:
    <*errors.errorString | 0xc003c343d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32746 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32746 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0017d98c0, 0x77b33d8, 0xc004fa1600, 0xc001776280, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2535
k8s.io/kubernetes/test/e2e/network.glob..func24.25()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1829 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001903980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001903980)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001903980, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
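The probe loop that just failed is the suite's NodePort reachability check: it execs netcat inside the client pod against nodeIP:nodePort about once a second until a two-minute budget runs out. Below is a minimal sketch of that pattern, not the framework's actual helper: it shells out to kubectl (assumed to be on PATH) instead of using the e2e exec utilities, and reuses the namespace, pod name, and endpoint from the log above purely for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// runNC execs netcat inside the client pod against nodeIP:nodePort,
// mirroring the `kubectl exec ... nc -v -t -w 2 <ip> <port>` probe above.
func runNC(ns, pod, nodeIP string, nodePort int) error {
	shellCmd := fmt.Sprintf("echo hostName | nc -v -t -w 2 %s %d", nodeIP, nodePort)
	out, err := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
		"--namespace", ns, "exec", pod,
		"--", "/bin/sh", "-x", "-c", shellCmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("probe failed: %v, output: %s", err, out)
	}
	return nil
}

func main() {
	// Poll once per second for up to 2 minutes, matching the
	// "service is not reachable within 2m0s timeout" budget in the log.
	err := wait.PollImmediate(1*time.Second, 2*time.Minute, func() (bool, error) {
		if probeErr := runNC("services-6974", "execpod-affinitycsw8q", "10.10.190.207", 32746); probeErr != nil {
			fmt.Println("Retrying...", probeErr)
			return false, nil // connection refused is treated as transient; keep polling
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("service is not reachable within the timeout:", err)
	}
}
```

A refused connection on every attempt, as here, usually means no endpoint ever answered on the NodePort, so the poll exhausts the full two minutes before the test fails.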
May 6 22:14:27.970: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-6974, will wait for the garbage collector to delete the pods
May 6 22:14:28.042: INFO: Deleting ReplicationController affinity-nodeport took: 3.278911ms
May 6 22:14:28.143: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.219579ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-6974".
STEP: Found 27 events.
May 6 22:14:37.361: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-24l5z: { } Scheduled: Successfully assigned services-6974/affinity-nodeport-24l5z to node2
May 6 22:14:37.361: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-d4f7m: { } Scheduled: Successfully assigned services-6974/affinity-nodeport-d4f7m to node2
May 6 22:14:37.361: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-sf56b: { } Scheduled: Successfully assigned services-6974/affinity-nodeport-sf56b to node1
May 6 22:14:37.361: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinitycsw8q: { } Scheduled: Successfully assigned services-6974/execpod-affinitycsw8q to node2
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-sf56b
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-d4f7m
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:13 +0000 UTC - event for affinity-nodeport: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-24l5z
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:15 +0000 UTC - event for affinity-nodeport-sf56b: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:16 +0000 UTC - event for affinity-nodeport-24l5z: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:16 +0000 UTC - event for affinity-nodeport-d4f7m: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:16 +0000 UTC - event for affinity-nodeport-sf56b: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 634.267447ms
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:16 +0000 UTC - event for affinity-nodeport-sf56b: {kubelet node1} Created: Created container affinity-nodeport
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:16 +0000 UTC - event for affinity-nodeport-sf56b: {kubelet node1} Started: Started container affinity-nodeport
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:17 +0000 UTC - event for affinity-nodeport-24l5z: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 603.951851ms
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:17 +0000 UTC - event for affinity-nodeport-24l5z: {kubelet node2} Created: Created container affinity-nodeport
May 6 22:14:37.361: INFO: At 2022-05-06 22:12:17 +0000 UTC - event for affinity-nodeport-24l5z: {kubelet node2} Started: Started container affinity-nodeport
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:17 +0000 UTC - event for affinity-nodeport-d4f7m: {kubelet node2} Created: Created container affinity-nodeport
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:17 +0000 UTC - event for affinity-nodeport-d4f7m: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 893.668495ms
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:18 +0000 UTC - event for affinity-nodeport-d4f7m: {kubelet node2} Started: Started container affinity-nodeport
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:21 +0000 UTC - event for execpod-affinitycsw8q: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:22 +0000 UTC - event for execpod-affinitycsw8q: {kubelet node2} Created: Created container agnhost-container
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:22 +0000 UTC - event for execpod-affinitycsw8q: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 265.64138ms
May 6 22:14:37.362: INFO: At 2022-05-06 22:12:23 +0000 UTC - event for execpod-affinitycsw8q: {kubelet node2} Started: Started container agnhost-container
May 6 22:14:37.362: INFO: At 2022-05-06 22:14:27 +0000 UTC - event for execpod-affinitycsw8q: {kubelet node2} Killing: Stopping container agnhost-container
May 6 22:14:37.362: INFO: At 2022-05-06 22:14:28 +0000 UTC - event for affinity-nodeport-24l5z: {kubelet node2} Killing: Stopping container affinity-nodeport
May 6 22:14:37.362: INFO: At 2022-05-06 22:14:28 +0000 UTC - event for affinity-nodeport-d4f7m: {kubelet node2} Killing: Stopping container affinity-nodeport
May 6 22:14:37.362: INFO: At 2022-05-06 22:14:28 +0000 UTC - event for affinity-nodeport-sf56b: {kubelet node1} Killing: Stopping container affinity-nodeport
May 6 22:14:37.364: INFO: POD NODE PHASE GRACE CONDITIONS
May 6 22:14:37.364: INFO: 
May 6 22:14:37.369: INFO: Logging node info for node master1
May 6 22:14:37.372: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 45860 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:33 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:33 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:33 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:14:33 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready 
status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:14:37.372: INFO: Logging kubelet events for node master1
May 6 22:14:37.374: INFO: Logging pods the kubelet thinks is on node master1
May 6 22:14:37.388: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-scheduler ready: true, restart count 0
May 6 22:14:37.388: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:14:37.388: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:14:37.388: INFO: Init container install-cni ready: true, restart count 0
May 6 22:14:37.388: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:14:37.388: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container coredns ready: true, restart count 1
May 6 22:14:37.388: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded)
May 6 22:14:37.388: INFO: Container docker-registry ready: true, restart count 0
May 6 22:14:37.388: INFO: Container nginx ready: true, restart count 0
May 6 22:14:37.388: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:14:37.388: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-controller-manager ready: true, restart count 2
May 6 22:14:37.388: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-multus ready: true, restart count 1
May 6 22:14:37.388: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:14:37.388: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:14:37.388: INFO: Container node-exporter ready: true, restart count 0
May 6 22:14:37.483: INFO: Latency metrics for node master1
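The "Node Info" blocks above are the framework dumping each node's full object for post-mortem debugging; the useful signal is usually the node conditions (NetworkUnavailable, MemoryPressure, DiskPressure, PIDPressure, Ready). Roughly the same view can be pulled with client-go. A sketch under stated assumptions — the kubeconfig path is the one shown in the log, and this is not the framework's own dump helper:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s (kubelet %s):\n", n.Name, n.Status.NodeInfo.KubeletVersion)
		// The same conditions that appear in the Node Info dumps above.
		for _, c := range n.Status.Conditions {
			fmt.Printf("  %-20s %-6s %s: %s\n", c.Type, c.Status, c.Reason, c.Message)
		}
	}
}
```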
May 6 22:14:37.483: INFO: Logging node info for node master2
May 6 22:14:37.485: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 45733 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux
node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory 
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:28 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:14:28 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:14:37.486: INFO: Logging kubelet events for node master2
May 6 22:14:37.488: INFO: Logging pods the kubelet thinks is on node master2
May 6 22:14:37.501: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.501: INFO: Container autoscaler ready: true, restart count 1
May 6 22:14:37.501: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:14:37.501: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:14:37.501: INFO: Container node-exporter ready: true, restart count 0
May 6 22:14:37.501: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.501: INFO: Container kube-controller-manager ready: true, restart count 1
May 6 22:14:37.501: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.501: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:14:37.501: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:14:37.501: INFO: Init container install-cni ready: true, restart count 0
May 6 22:14:37.501: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:14:37.501: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.501: INFO: Container kube-multus ready: true, restart count 1
May 6 22:14:37.501: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.502: INFO: Container kube-scheduler ready: true, restart count 2
May 6 22:14:37.502: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.502: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:14:37.606: INFO: Latency metrics for node master2
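The "Logging pods the kubelet thinks is on node ..." lists above can be reproduced against the API server with a field selector on spec.nodeName. A minimal sketch, again an assumption-laden illustration rather than the framework's helper (the kubeconfig path and the node name "master2" are taken from the log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podsOnNode lists the pods the API server believes are bound to a node,
// roughly what the per-node pod dumps above show.
func podsOnNode(cs kubernetes.Interface, node string) error {
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + node})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s created at %s\n", p.Namespace, p.Name, p.CreationTimestamp)
		for _, cst := range p.Status.ContainerStatuses {
			fmt.Printf("  Container %s ready: %v, restart count %d\n",
				cst.Name, cst.Ready, cst.RestartCount)
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	if err := podsOnNode(kubernetes.NewForConfigOrDie(cfg), "master2"); err != nil {
		panic(err)
	}
}
```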
May 6 22:14:37.606: INFO: Logging node info for node master3
May 6 22:14:37.609: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 45940 0 2022-05-06 20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:36 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06
22:14:36 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:36 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:14:36 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc 
k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:14:37.610: INFO: Logging kubelet events for node master3
May 6 22:14:37.612: INFO: Logging pods the kubelet thinks is on node master3
May 6 22:14:37.628: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:14:37.628: INFO: Container node-exporter ready: true, restart count 0
May 6 22:14:37.628: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-controller-manager ready: true, restart count 3
May 6 22:14:37.628: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-scheduler ready: true, restart count 2
May 6 22:14:37.628: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:14:37.628: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:14:37.628: INFO: Init container install-cni ready: true, restart count 2
May 6 22:14:37.628: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:14:37.628: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:14:37.628: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.628: INFO: Container kube-multus ready: true, restart count 1
May 6 22:14:37.629: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.629: INFO: Container coredns ready: true, restart count 1
May 6 22:14:37.629: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May 6 22:14:37.629: INFO: Container nfd-controller ready: true, restart count 0
May 6 22:14:37.706: INFO: Latency metrics for node master3
May 6 22:14:37.706: INFO: Logging node info for node node1
May 6 22:14:37.709: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 45746 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true
feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 
2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:29 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:29 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:29 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:14:29 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 
k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:14:37.710: INFO: Logging kubelet events for node node1 May 6 22:14:37.712: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:14:37.723: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:14:37.723: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:14:37.723: INFO: 
kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:14:37.723: INFO: Init container install-cni ready: true, restart count 2 May 6 22:14:37.723: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:14:37.723: INFO: test-rs-4q5sb started at 2022-05-06 22:14:27 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container httpd ready: true, restart count 0 May 6 22:14:37.723: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:14:37.723: INFO: pod-logs-websocket-d9b5cd1b-5e75-4d97-a6d6-1825b17b6ab6 started at 2022-05-06 22:14:25 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container main ready: true, restart count 0 May 6 22:14:37.723: INFO: busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 started at 2022-05-06 22:13:17 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container busybox ready: true, restart count 0 May 6 22:14:37.723: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container kube-multus ready: true, restart count 1 May 6 22:14:37.723: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:14:37.723: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:14:37.723: INFO: Container node-exporter ready: true, restart count 0 May 6 22:14:37.723: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:14:37.723: INFO: Container collectd ready: true, restart count 0 May 6 22:14:37.723: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:14:37.723: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:14:37.723: INFO: affinity-nodeport-transition-fthrd started at 2022-05-06 22:14:29 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 6 22:14:37.723: INFO: pod-projected-secrets-a60d042e-e2de-4205-88d7-9b8d694b850b started at 2022-05-06 22:14:06 +0000 UTC (0+3 container statuses recorded) May 6 22:14:37.723: INFO: Container creates-volume-test ready: true, restart count 0 May 6 22:14:37.723: INFO: Container dels-volume-test ready: true, restart count 0 May 6 22:14:37.723: INFO: Container upds-volume-test ready: true, restart count 0 May 6 22:14:37.723: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 22:14:37.723: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:14:37.723: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:14:37.723: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded) May 6 22:14:37.723: INFO: Container config-reloader ready: true, restart count 0 May 6 22:14:37.723: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:14:37.723: INFO: Container grafana ready: true, restart count 0 May 6 22:14:37.723: INFO: Container prometheus ready: true, restart count 1 May 6 22:14:37.723: INFO: client-containers-410d29b6-e25a-4591-82ae-1dbf28816656 started at 2022-05-06 22:14:32 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container agnhost-container ready: true, restart count 0 
May 6 22:14:37.723: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:14:37.723: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:14:37.723: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 22:14:37.723: INFO: Container discover ready: false, restart count 0 May 6 22:14:37.723: INFO: Container init ready: false, restart count 0 May 6 22:14:37.723: INFO: Container install ready: false, restart count 0 May 6 22:14:37.723: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 22:14:37.723: INFO: Container nodereport ready: true, restart count 0 May 6 22:14:37.723: INFO: Container reconcile ready: true, restart count 0 May 6 22:14:37.866: INFO: Latency metrics for node node1 May 6 22:14:37.866: INFO: Logging node info for node node2 May 6 22:14:37.868: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 45897 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true 
flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:34 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:34 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:14:34 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:14:34 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:14:37.869: INFO: Logging kubelet events for node node2 May 6 22:14:37.870: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:14:38.770: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:14:38.770: INFO: test-rs-977b8 started at 2022-05-06 22:14:22 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container httpd ready: true, restart count 0 May 6 22:14:38.770: INFO: foo-p4fmd started at 2022-05-06 22:14:04 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container c ready: true, restart count 0 May 6 22:14:38.770: INFO: server-envvars-1c5d15c9-68b0-410c-8b56-e701a528d35b started at 2022-05-06 22:14:33 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container srv ready: false, restart count 0 May 6 22:14:38.770: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container kube-multus ready: true, restart count 1 May 6 22:14:38.770: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:14:38.770: INFO: Container nodereport ready: true, restart count 0 May 6 22:14:38.770: INFO: Container reconcile ready: true, restart count 0 May 
6 22:14:38.770: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container tas-extender ready: true, restart count 0 May 6 22:14:38.770: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:14:38.770: INFO: affinity-nodeport-transition-mclwf started at 2022-05-06 22:14:29 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 6 22:14:38.770: INFO: affinity-nodeport-transition-r6tgq started at 2022-05-06 22:14:29 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container affinity-nodeport-transition ready: true, restart count 0 May 6 22:14:38.770: INFO: busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df started at 2022-05-06 22:13:54 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container busybox-scheduling-9577d56f-b5ef-449a-92a9-9bf0feea04df ready: true, restart count 0 May 6 22:14:38.770: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:14:38.770: INFO: Container collectd ready: true, restart count 0 May 6 22:14:38.770: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:14:38.770: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:14:38.770: INFO: foo-25n42 started at 2022-05-06 22:14:04 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container c ready: true, restart count 0 May 6 22:14:38.770: INFO: test-rs-gj2nf started at 2022-05-06 22:14:27 +0000 UTC (0+2 container statuses recorded) May 6 22:14:38.770: INFO: Container httpd ready: true, restart count 0 May 6 22:14:38.770: INFO: Container test-rs ready: true, restart count 0 May 6 22:14:38.770: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 22:14:38.770: INFO: Container discover ready: false, restart count 0 May 6 22:14:38.770: INFO: Container init ready: false, restart count 0 May 6 22:14:38.770: INFO: Container install ready: false, restart count 0 May 6 22:14:38.770: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:14:38.770: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:14:38.770: INFO: Container node-exporter ready: true, restart count 0 May 6 22:14:38.770: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:14:38.770: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:14:38.770: INFO: Init container install-cni ready: true, restart count 1 May 6 22:14:38.770: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:14:38.770: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:14:38.770: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:14:38.770: INFO: 
sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:14:38.770: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:14:38.770: INFO: execpod-affinityhwsc7 started at 2022-05-06 22:14:35 +0000 UTC (0+1 container statuses recorded) May 6 22:14:38.770: INFO: Container agnhost-container ready: false, restart count 0 May 6 22:14:39.204: INFO: Latency metrics for node node2 May 6 22:14:39.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6974" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [146.096 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:27.968: Unexpected error: <*errors.errorString | 0xc003c343d0>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32746 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32746 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":0,"skipped":7,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:39.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting the proxy server May 6 22:14:39.269: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9315 proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:39.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9315" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":1,"skipped":18,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:39.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-9cb942eb-a359-4ba7-abc2-a3afb277f5ce STEP: Creating a pod to test consume secrets May 6 22:14:39.440: INFO: Waiting up to 5m0s for pod "pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e" in namespace "secrets-2928" to be "Succeeded or Failed" May 6 22:14:39.442: INFO: Pod "pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231271ms May 6 22:14:41.446: INFO: Pod "pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005662333s May 6 22:14:43.449: INFO: Pod "pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00880854s STEP: Saw pod success May 6 22:14:43.449: INFO: Pod "pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e" satisfied condition "Succeeded or Failed" May 6 22:14:43.451: INFO: Trying to get logs from node node1 pod pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e container secret-volume-test: STEP: delete the pod May 6 22:14:43.473: INFO: Waiting for pod pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e to disappear May 6 22:14:43.475: INFO: Pod pod-secrets-36947153-8c4a-49ef-9a51-92eae821a61e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:43.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2928" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":24,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:04.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6618, will wait for the garbage collector to delete the pods May 6 22:14:12.130: INFO: Deleting Job.batch foo took: 4.297736ms May 6 22:14:12.231: INFO: Terminating Job.batch foo pods took: 100.827862ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:45.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6618" for this suite. • [SLOW TEST:41.502 seconds] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":25,"skipped":272,"failed":0} S ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:45.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:45.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1134" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":26,"skipped":273,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:33.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:33.912: INFO: The status of Pod server-envvars-1c5d15c9-68b0-410c-8b56-e701a528d35b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:35.916: INFO: The status of Pod server-envvars-1c5d15c9-68b0-410c-8b56-e701a528d35b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:37.917: INFO: The status of Pod server-envvars-1c5d15c9-68b0-410c-8b56-e701a528d35b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:39.916: INFO: The status of Pod server-envvars-1c5d15c9-68b0-410c-8b56-e701a528d35b is Running (Ready = true) May 6 22:14:39.935: INFO: Waiting up to 5m0s for pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac" in namespace "pods-6934" to be "Succeeded or Failed" May 6 22:14:39.937: INFO: Pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207238ms May 6 22:14:41.942: INFO: Pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444925s May 6 22:14:43.945: INFO: Pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010753742s May 6 22:14:45.949: INFO: Pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014214027s STEP: Saw pod success May 6 22:14:45.949: INFO: Pod "client-envvars-22acf209-c89c-4961-b11d-9942f45723ac" satisfied condition "Succeeded or Failed" May 6 22:14:45.951: INFO: Trying to get logs from node node2 pod client-envvars-22acf209-c89c-4961-b11d-9942f45723ac container env3cont: STEP: delete the pod May 6 22:14:45.966: INFO: Waiting for pod client-envvars-22acf209-c89c-4961-b11d-9942f45723ac to disappear May 6 22:14:45.967: INFO: Pod client-envvars-22acf209-c89c-4961-b11d-9942f45723ac no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:45.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6934" for this suite. 
• [SLOW TEST:12.099 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":626,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:38.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:44.373: INFO: Deleting pod "var-expansion-a74b3218-c1d3-474e-a8e1-b091cb1e1af9" in namespace "var-expansion-2289" May 6 22:14:44.378: INFO: Wait up to 5m0s for pod "var-expansion-a74b3218-c1d3-474e-a8e1-b091cb1e1af9" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:48.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2289" for this suite. 
• [SLOW TEST:10.062 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":27,"skipped":447,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:45.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:14:46.454: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:14:48.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472086, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472086, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472086, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472086, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:14:51.475: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:51.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1072" for this suite. 
STEP: Destroying namespace "webhook-1072-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.566 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":33,"skipped":631,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:48.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-339aea5f-fd72-4044-b379-6745b2fbafd4 STEP: Creating a pod to test consume secrets May 6 22:14:48.483: INFO: Waiting up to 5m0s for pod "pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e" in namespace "secrets-5538" to be "Succeeded or Failed" May 6 22:14:48.485: INFO: Pod "pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 1.911753ms May 6 22:14:50.489: INFO: Pod "pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005536012s May 6 22:14:52.493: INFO: Pod "pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009693041s STEP: Saw pod success May 6 22:14:52.493: INFO: Pod "pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e" satisfied condition "Succeeded or Failed" May 6 22:14:52.496: INFO: Trying to get logs from node node1 pod pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e container secret-volume-test: STEP: delete the pod May 6 22:14:52.510: INFO: Waiting for pod pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e to disappear May 6 22:14:52.512: INFO: Pod pod-secrets-7b00dfb0-2716-4bb3-87b7-461286266b8e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:52.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5538" for this suite. STEP: Destroying namespace "secret-namespace-2250" for this suite. 
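The Secrets spec just finished checks that secret volume lookups are strictly namespaced: an identically named secret in another namespace must not affect what gets mounted. A minimal sketch, assuming both namespaces already exist (all names illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Same secret name in two namespaces, different payloads.
	for ns, val := range map[string]string{"ns-a": "value-a", "ns-b": "value-b"} {
		secret := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "shared-name"},
			StringData: map[string]string{"data-1": val},
		}
		if _, err := cs.CoreV1().Secrets(ns).Create(ctx, secret, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// A pod in ns-a mounting "shared-name" must see value-a only; the
	// identically named secret in ns-b is invisible to it.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "shared-name"},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "secret-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("ns-a").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```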
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":462,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:51.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics May 6 22:14:52.645: INFO: The status of Pod kube-controller-manager-master3 is Running (Ready = true) May 6 22:14:52.804: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:14:52.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7576" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":34,"skipped":635,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:43.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52 STEP: create the container to handle the HTTPGet hook request. 
May 6 22:14:43.540: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:45.543: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:47.546: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook May 6 22:14:47.561: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:49.566: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:51.565: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook May 6 22:14:51.572: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:14:51.574: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:14:53.575: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:14:53.578: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:14:55.576: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:14:55.580: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:14:57.578: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:14:57.580: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:14:59.576: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:14:59.579: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:15:01.577: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:15:01.582: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:15:03.575: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:15:03.579: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:15:05.575: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:15:05.578: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:15:07.576: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:15:07.579: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:07.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9937" for this suite. 
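The lifecycle-hook spec wires a preStop HTTP GET into the pod and then checks that the handler pod received the request; the long "still exists" tail above is just the graceful-deletion window during which the hook fires. A sketch of the hooked pod (handler IP, port and path are illustrative; note that client-go releases before v0.23 name the LifecycleHandler type corev1.Handler):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
				Lifecycle: &corev1.Lifecycle{
					// On deletion, the kubelet GETs this URL before sending
					// SIGTERM; the suite's handler pod records the request.
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.3.58", // handler pod IP, illustrative
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```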
• [SLOW TEST:24.086 seconds] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":35,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:45.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-secret-5lrt STEP: Creating a pod to test atomic-volume-subpath May 6 22:14:45.683: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-5lrt" in namespace "subpath-2" to be "Succeeded or Failed" May 6 22:14:45.686: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.932253ms May 6 22:14:47.689: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006178975s May 6 22:14:49.692: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 4.00953468s May 6 22:14:51.696: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 6.01338666s May 6 22:14:53.699: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 8.016595262s May 6 22:14:55.705: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 10.021799828s May 6 22:14:57.708: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 12.025557666s May 6 22:14:59.711: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 14.028592953s May 6 22:15:01.716: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 16.033085298s May 6 22:15:03.720: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 18.036663573s May 6 22:15:05.723: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. Elapsed: 20.040075609s May 6 22:15:07.726: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.043576785s May 6 22:15:09.730: INFO: Pod "pod-subpath-test-secret-5lrt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.047538562s STEP: Saw pod success May 6 22:15:09.731: INFO: Pod "pod-subpath-test-secret-5lrt" satisfied condition "Succeeded or Failed" May 6 22:15:09.733: INFO: Trying to get logs from node node2 pod pod-subpath-test-secret-5lrt container test-container-subpath-secret-5lrt: STEP: delete the pod May 6 22:15:09.748: INFO: Waiting for pod pod-subpath-test-secret-5lrt to disappear May 6 22:15:09.751: INFO: Pod pod-subpath-test-secret-5lrt no longer exists STEP: Deleting pod pod-subpath-test-secret-5lrt May 6 22:15:09.751: INFO: Deleting pod "pod-subpath-test-secret-5lrt" in namespace "subpath-2" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:09.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2" for this suite. • [SLOW TEST:24.116 seconds] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:07.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-467e50b7-a23e-4ae7-8d60-c73fab382bae STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:13.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5791" for this suite. 
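The ConfigMap spec above exercises both data and binaryData: text keys surface as files in the volume, and binary keys must round-trip byte-for-byte. A sketch of the objects involved (names and bytes are illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"},
		Data:       map[string]string{"data-1": "value-1"},              // becomes /etc/configmap-volume/data-1
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}}, // becomes /etc/configmap-volume/dump
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-upd"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```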
• [SLOW TEST:6.075 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":65,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:16.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:14:16.034: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:17.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7461" for this suite. • [SLOW TEST:61.311 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":25,"skipped":469,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:52.819: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:20.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-483" for this suite. • [SLOW TEST:28.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":-1,"completed":35,"skipped":637,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} S ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:17.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 6 22:15:17.459: INFO: Waiting up to 5m0s for pod "downward-api-5836c389-8408-41f2-b160-fa713f226af1" in namespace "downward-api-7541" to be "Succeeded or Failed" May 6 22:15:17.462: INFO: Pod "downward-api-5836c389-8408-41f2-b160-fa713f226af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324846ms May 6 22:15:19.465: INFO: Pod "downward-api-5836c389-8408-41f2-b160-fa713f226af1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005953849s May 6 22:15:21.469: INFO: Pod "downward-api-5836c389-8408-41f2-b160-fa713f226af1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009993933s STEP: Saw pod success May 6 22:15:21.470: INFO: Pod "downward-api-5836c389-8408-41f2-b160-fa713f226af1" satisfied condition "Succeeded or Failed" May 6 22:15:21.472: INFO: Trying to get logs from node node2 pod downward-api-5836c389-8408-41f2-b160-fa713f226af1 container dapi-container: STEP: delete the pod May 6 22:15:21.484: INFO: Waiting for pod downward-api-5836c389-8408-41f2-b160-fa713f226af1 to disappear May 6 22:15:21.486: INFO: Pod downward-api-5836c389-8408-41f2-b160-fa713f226af1 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:21.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7541" for this suite. 
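The Downward API spec above is about the fallback rule: when a container declares no resource limits, limits.cpu and limits.memory resolved through the downward API report the node's allocatable capacity instead. A sketch of the pod shape (names illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-defaults"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env | grep _LIMIT"},
				// No Resources.Limits set: both variables fall back to the
				// node's allocatable CPU and memory.
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```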
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":512,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:13.761: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:15:19.827: INFO: DNS probes using dns-test-aa3aa2b6-14e9-47c4-a45b-e00564e3b49d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:15:25.866: INFO: DNS probes using dns-test-2ed7ade8-eb96-4ae7-be79-584965d6183c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3392.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3392.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:15:31.913: INFO: DNS probes using dns-test-d8de1d17-c624-4bab-8d65-da319e198d74 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:31.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3392" for this suite. 
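The DNS spec above walks one service through three shapes: ExternalName (CNAME), a changed ExternalName (new CNAME target), and finally ClusterIP (A record), probing each with the dig loops shown. Creating the starting ExternalName service looks roughly like this (service name mirrors the log; namespace and target are illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// dig +short dns-test-service-3.<ns>.svc.cluster.local CNAME
	// now answers with the external name below.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}
	if _, err := cs.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// Updating Spec.ExternalName changes the CNAME target; switching
	// Spec.Type to ClusterIP makes the same DNS name resolve to an A record.
}
```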
• [SLOW TEST:18.176 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":5,"skipped":84,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:21.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:15:27.639: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.642: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.644: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.648: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.656: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.660: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.662: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.665: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5.svc.cluster.local from pod dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08: the server could not find the requested resource (get pods dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08) May 6 22:15:27.670: INFO: Lookups using dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5.svc.cluster.local jessie_udp@dns-test-service-2.dns-5.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5.svc.cluster.local] May 6 22:15:32.706: INFO: DNS probes using dns-5/dns-test-eacd0a38-48ed-4ed7-a233-86338b65bf08 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 
6 22:15:32.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5" for this suite. • [SLOW TEST:11.150 seconds] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":27,"skipped":557,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:32.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods May 6 22:15:32.799: INFO: created test-pod-1 May 6 22:15:32.813: INFO: created test-pod-2 May 6 22:15:32.827: INFO: created test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:32.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1072" for this suite. 
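Pod collection deletion, as exercised just above, is a single API call scoped by a selector rather than three individual DELETEs. A sketch of the pattern (label key/value are illustrative):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Create test-pod-1..3 carrying a common label.
	for i := 1; i <= 3; i++ {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name:   fmt.Sprintf("test-pod-%d", i),
				Labels: map[string]string{"type": "Testing"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "main", Image: "busybox", Command: []string{"sleep", "3600"}}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}

	// One call removes everything the selector matches.
	err = cs.CoreV1().Pods("default").DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "type=Testing"})
	if err != nil {
		panic(err)
	}
}
```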
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":28,"skipped":576,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:31.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:15:31.996: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes May 6 22:15:32.009: INFO: The status of Pod pod-exec-websocket-d99a9877-c397-498d-b071-82f3b7755e7e is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:34.012: INFO: The status of Pod pod-exec-websocket-d99a9877-c397-498d-b071-82f3b7755e7e is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:36.012: INFO: The status of Pod pod-exec-websocket-d99a9877-c397-498d-b071-82f3b7755e7e is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:36.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8300" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":104,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:09.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-7077 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 22:15:09.853: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 22:15:09.887: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:11.890: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:13.891: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:15.892: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:17.891: INFO: The status of Pod netserver-0 is Running 
(Ready = false) May 6 22:15:19.892: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:21.891: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:23.891: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:25.891: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:27.892: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:29.891: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:31.895: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 22:15:31.910: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 22:15:35.945: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 6 22:15:35.945: INFO: Going to poll 10.244.3.58 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 6 22:15:35.948: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.3.58 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7077 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:15:35.948: INFO: >>> kubeConfig: /root/.kube/config May 6 22:15:37.041: INFO: Found all 1 expected endpoints: [netserver-0] May 6 22:15:37.042: INFO: Going to poll 10.244.4.124 on port 8081 at least 0 times, with a maximum of 34 tries before failing May 6 22:15:37.044: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.4.124 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7077 PodName:host-test-container-pod ContainerName:agnhost-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:15:37.044: INFO: >>> kubeConfig: /root/.kube/config May 6 22:15:39.037: INFO: Found all 1 expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:39.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7077" for this suite. 
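The ExecWithOptions lines above are the framework's wrapper around the pods/exec subresource: the suite shells into a host-network test pod and fires the UDP probe with nc. The same call in plain client-go (pod, container and target IP copied from the log, so they are illustrative outside this run):

```go
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// POST to the pod's exec subresource, asking for the same nc probe the
	// suite runs against the netserver pod IP.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("pod-network-test-7077").
		Name("host-test-container-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost-container",
			Command:   []string{"/bin/sh", "-c", `echo hostName | nc -w 1 -u 10.244.3.58 8081 | grep -v '^\s*$'`},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println(stdout.String()) // the netserver pod's hostname on success
}
```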
• [SLOW TEST:29.220 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":319,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:06.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-26f857ae-74b4-4a96-a3ea-2cd65ef3b544 STEP: Creating secret with name s-test-opt-upd-8ade75ff-1a50-4594-93c0-0605bd5b059a STEP: Creating the pod May 6 22:14:06.387: INFO: The status of Pod pod-projected-secrets-a60d042e-e2de-4205-88d7-9b8d694b850b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:08.390: INFO: The status of Pod pod-projected-secrets-a60d042e-e2de-4205-88d7-9b8d694b850b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:10.390: INFO: The status of Pod pod-projected-secrets-a60d042e-e2de-4205-88d7-9b8d694b850b is Pending, waiting for it to be Running (with Ready = true) May 6 22:14:12.392: INFO: The status of Pod pod-projected-secrets-a60d042e-e2de-4205-88d7-9b8d694b850b is Running (Ready = true) STEP: Deleting secret s-test-opt-del-26f857ae-74b4-4a96-a3ea-2cd65ef3b544 STEP: Updating secret s-test-opt-upd-8ade75ff-1a50-4594-93c0-0605bd5b059a STEP: Creating secret with name s-test-opt-create-3133bfcb-f42b-449b-ae83-ec43bb6dc9de STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:43.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4766" for this suite. 
• [SLOW TEST:97.382 seconds] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":696,"failed":0} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:32.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:43.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6640" for this suite. • [SLOW TEST:11.066 seconds] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":29,"skipped":583,"failed":0} SS ------------------------------ [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:20.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Performing setup for networking test in namespace pod-network-test-6410 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 22:15:20.912: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 6 22:15:20.942: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:22.945: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:24.947: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 6 22:15:26.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:28.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:30.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:32.946: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:34.946: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:36.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:38.947: INFO: The status of Pod netserver-0 is Running (Ready = false) May 6 22:15:40.948: INFO: The status of Pod netserver-0 is Running (Ready = true) May 6 22:15:40.952: INFO: The status of Pod netserver-1 is Running (Ready = false) May 6 22:15:42.955: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 6 22:15:46.977: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2 May 6 22:15:46.977: INFO: Breadth first check of 10.244.3.61 on host 10.10.190.207... May 6 22:15:46.980: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.130:9080/dial?request=hostname&protocol=http&host=10.244.3.61&port=8080&tries=1'] Namespace:pod-network-test-6410 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:15:46.980: INFO: >>> kubeConfig: /root/.kube/config May 6 22:15:47.341: INFO: Waiting for responses: map[] May 6 22:15:47.341: INFO: reached 10.244.3.61 after 0/1 tries May 6 22:15:47.341: INFO: Breadth first check of 10.244.4.126 on host 10.10.190.208... May 6 22:15:47.343: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.4.130:9080/dial?request=hostname&protocol=http&host=10.244.4.126&port=8080&tries=1'] Namespace:pod-network-test-6410 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} May 6 22:15:47.344: INFO: >>> kubeConfig: /root/.kube/config May 6 22:15:47.449: INFO: Waiting for responses: map[] May 6 22:15:47.449: INFO: reached 10.244.4.126 after 0/1 tries May 6 22:15:47.449: INFO: Going to retry 0 out of 2 pods.... 
[AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:47.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6410" for this suite. • [SLOW TEST:26.569 seconds] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23 Granular Checks: Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30 should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":638,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:39.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 6 22:15:39.114: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:49.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-242" for this suite. 
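The "setting up watch" / "verifying pod deletion was observed" steps above use a plain pod watch: the test asserts it sees an ADDED event when the pod is submitted and a DELETED event once it is gone. A minimal version (pod name illustrative):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	watchpkg "k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch a single pod by name; the channel yields ADDED when it is
	// submitted, MODIFIED while it runs, and DELETED once it is gone.
	w, err := cs.CoreV1().Pods("default").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=pod-submit-remove",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println("observed event:", ev.Type)
		if ev.Type == watchpkg.Deleted {
			break // deletion was observed, as the spec requires
		}
	}
}
```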
• [SLOW TEST:10.103 seconds] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":337,"failed":0} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:43.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-db371ebd-7e0a-4daa-8c45-d94f98d3fef4 STEP: Creating a pod to test consume secrets May 6 22:15:43.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74" in namespace "projected-1637" to be "Succeeded or Failed" May 6 22:15:43.984: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324784ms May 6 22:15:45.987: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005108653s May 6 22:15:47.990: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008202912s May 6 22:15:49.994: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012485116s STEP: Saw pod success May 6 22:15:49.994: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74" satisfied condition "Succeeded or Failed" May 6 22:15:49.997: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 container secret-volume-test: STEP: delete the pod May 6 22:15:50.014: INFO: Waiting for pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 to disappear May 6 22:15:50.016: INFO: Pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:15:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1637" for this suite. 
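"Consumable in multiple volumes" above means one secret backing two separate projected volumes in the same pod; both mount paths must expose the same keys. A sketch (all names illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Two volumes, both projecting the same secret, mounted at two paths.
	mkVolume := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						},
					}},
				},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes:       []corev1.Volume{mkVolume("secret-volume-1"), mkVolume("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-1"},
					{Name: "secret-volume-2", MountPath: "/etc/secret-2"},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```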
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:43.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-db371ebd-7e0a-4daa-8c45-d94f98d3fef4
STEP: Creating a pod to test consume secrets
May 6 22:15:43.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74" in namespace "projected-1637" to be "Succeeded or Failed"
May 6 22:15:43.984: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324784ms
May 6 22:15:45.987: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005108653s
May 6 22:15:47.990: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Pending", Reason="", readiness=false. Elapsed: 4.008202912s
May 6 22:15:49.994: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012485116s
STEP: Saw pod success
May 6 22:15:49.994: INFO: Pod "pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74" satisfied condition "Succeeded or Failed"
May 6 22:15:49.997: INFO: Trying to get logs from node node2 pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 container secret-volume-test:
STEP: delete the pod
May 6 22:15:50.014: INFO: Waiting for pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 to disappear
May 6 22:15:50.016: INFO: Pod pod-projected-secrets-f09263b1-1adb-4dff-9c5b-60dff2368c74 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:15:50.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1637" for this suite.

• [SLOW TEST:6.082 seconds]
[sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":585,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
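This spec projects one secret into the same pod twice, at two different mount points, and checks both copies are readable. A compilable sketch of the relevant pod shape using client-go types (the pod name, mount paths, and the agnhost mounttest arguments are illustrative assumptions, not copied from the harness):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // twoVolumeSecretPod mounts the same secret through two projected volumes.
    func twoVolumeSecretPod(secretName string) *corev1.Pod {
    	projected := func(volName string) corev1.Volume {
    		return corev1.Volume{
    			Name: volName,
    			VolumeSource: corev1.VolumeSource{
    				Projected: &corev1.ProjectedVolumeSource{
    					Sources: []corev1.VolumeProjection{{
    						Secret: &corev1.SecretProjection{
    							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
    						},
    					}},
    				},
    			},
    		}
    	}
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes:       []corev1.Volume{projected("secret-volume-1"), projected("secret-volume-2")},
    			Containers: []corev1.Container{{
    				Name:  "secret-volume-test",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
    				// agnhost's mounttest subcommand prints file contents; flag value assumed.
    				Args: []string{"mounttest", "--file_content=/etc/secret-volume-1/data-1"},
    				VolumeMounts: []corev1.VolumeMount{
    					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
    					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
    				},
    			}},
    		},
    	}
    }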
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:43.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 6 22:15:44.354: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 6 22:15:46.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:15:48.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472144, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 6 22:15:51.374: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:15:51.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6283" for this suite.
STEP: Destroying namespace "webhook-6283-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.683 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":38,"skipped":709,"failed":0}
S
------------------------------
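"Registering the mutating configmap webhook via the AdmissionRegistration API" corresponds to creating a MutatingWebhookConfiguration pointing at the webhook Service deployed just above. A hedged sketch of that registration (the configuration name, webhook path, and CA-bundle plumbing are illustrative; the real harness also sets namespace selectors and tears everything down in webhook.go:102):

    package sketch

    import (
    	"context"

    	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // registerConfigMapMutatingWebhook registers a mutating webhook for configmap
    // CREATE requests, backed by an in-cluster Service. caBundle must hold the PEM
    // CA that signed the webhook server's certificate ("Setting up server cert").
    func registerConfigMapMutatingWebhook(ctx context.Context, cs kubernetes.Interface, ns string, caBundle []byte) error {
    	failurePolicy := admissionregistrationv1.Fail
    	sideEffects := admissionregistrationv1.SideEffectClassNone
    	path := "/mutating-configmaps" // illustrative endpoint on the webhook server
    	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
    		ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmap-demo"},
    		Webhooks: []admissionregistrationv1.MutatingWebhook{{
    			Name: "mutate-configmap.example.com",
    			ClientConfig: admissionregistrationv1.WebhookClientConfig{
    				Service: &admissionregistrationv1.ServiceReference{
    					Namespace: ns,
    					Name:      "e2e-test-webhook",
    					Path:      &path,
    				},
    				CABundle: caBundle,
    			},
    			Rules: []admissionregistrationv1.RuleWithOperations{{
    				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
    				Rule: admissionregistrationv1.Rule{
    					APIGroups:   []string{""},
    					APIVersions: []string{"v1"},
    					Resources:   []string{"configmaps"},
    				},
    			}},
    			FailurePolicy:           &failurePolicy,
    			SideEffects:             &sideEffects,
    			AdmissionReviewVersions: []string{"v1"},
    		}},
    	}
    	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
    	return err
    }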
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:50.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 6 22:15:50.124: INFO: The status of Pod pod-secrets-24e687db-5c12-4191-a279-02b5e8a5fb6f is Pending, waiting for it to be Running (with Ready = true)
May 6 22:15:52.128: INFO: The status of Pod pod-secrets-24e687db-5c12-4191-a279-02b5e8a5fb6f is Pending, waiting for it to be Running (with Ready = true)
May 6 22:15:54.129: INFO: The status of Pod pod-secrets-24e687db-5c12-4191-a279-02b5e8a5fb6f is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:15:54.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7535" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":31,"skipped":608,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:54.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
May 6 22:15:54.226: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533" in namespace "downward-api-8394" to be "Succeeded or Failed"
May 6 22:15:54.229: INFO: Pod "downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621001ms
May 6 22:15:56.234: INFO: Pod "downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007888686s
May 6 22:15:58.238: INFO: Pod "downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011510924s
STEP: Saw pod success
May 6 22:15:58.238: INFO: Pod "downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533" satisfied condition "Succeeded or Failed"
May 6 22:15:58.241: INFO: Trying to get logs from node node2 pod downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533 container client-container:
STEP: delete the pod
May 6 22:15:58.258: INFO: Waiting for pod downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533 to disappear
May 6 22:15:58.260: INFO: Pod downwardapi-volume-be1355e4-d167-498f-ac75-cf3251d9a533 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:15:58.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8394" for this suite.

•
------------------------------
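When a container declares no memory limit, the kubelet substitutes the node's allocatable memory for the limits.memory resource exposed through the downward API, which is exactly what the spec above asserts. The volume wiring it exercises looks roughly like this (client-go types; the pod name, file path, and mounttest arguments are illustrative):

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIMemoryLimitPod exposes the container's effective memory limit
    // as a file. With no resources.limits.memory set on the container, the
    // value falls back to the node's allocatable memory.
    func downwardAPIMemoryLimitPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path: "memory_limit",
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "limits.memory",
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:  "client-container", // note: no memory limit set
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32",
    				Args:  []string{"mounttest", "--file_content=/etc/podinfo/memory_limit"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "podinfo",
    					MountPath: "/etc/podinfo",
    				}},
    			}},
    		},
    	}
    }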
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:51.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 6 22:15:51.480: INFO: Pod name rollover-pod: Found 0 pods out of 1
May 6 22:15:56.484: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 6 22:15:56.484: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
May 6 22:15:58.487: INFO: Creating deployment "test-rollover-deployment"
May 6 22:15:58.494: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
May 6 22:16:00.500: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
May 6 22:16:00.507: INFO: Ensure that both replica sets have 1 created replica
May 6 22:16:00.513: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
May 6 22:16:00.522: INFO: Updating deployment test-rollover-deployment
May 6 22:16:00.522: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
May 6 22:16:02.528: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
May 6 22:16:02.532: INFO: Make sure deployment "test-rollover-deployment" is complete
May 6 22:16:02.537: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:02.537: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472160, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:04.545: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:04.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472163, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:06.544: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:06.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472163, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:08.547: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:08.547: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472163, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:10.546: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:10.546: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472163, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:12.551: INFO: all replica sets need to contain the pod-template-hash label
May 6 22:16:12.551: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472163, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472158, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 6 22:16:14.548: INFO:
May 6 22:16:14.548: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80
May 6 22:16:14.556: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9666 8323686f-6f07-4d6f-915d-d3a83fef1893 47821 2 2022-05-06 22:15:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-05-06 22:16:00 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:16:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004738f28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-06 22:15:58 +0000 UTC,LastTransitionTime:2022-05-06 22:15:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2022-05-06 22:16:13 +0000 UTC,LastTransitionTime:2022-05-06 22:15:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
May 6 22:16:14.560: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9666 029490fe-b434-43a7-9b27-5b675572c494 47810 2 2022-05-06 22:16:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 8323686f-6f07-4d6f-915d-d3a83fef1893 0xc00523b2e0 0xc00523b2e1}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:16:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8323686f-6f07-4d6f-915d-d3a83fef1893\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00523b358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
May 6 22:16:14.560: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
May 6 22:16:14.560: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9666 6ebe6668-3732-4e13-a95a-1536f5857394 47820 2 2022-05-06 22:15:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 8323686f-6f07-4d6f-915d-d3a83fef1893 0xc00523b0c7 0xc00523b0c8}] [] [{e2e.test Update apps/v1 2022-05-06 22:15:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:16:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8323686f-6f07-4d6f-915d-d3a83fef1893\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00523b168 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 6 22:16:14.560: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9666 5e474a48-d9e7-4878-b165-3509e1dd0186 47720 2 2022-05-06 22:15:58 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 8323686f-6f07-4d6f-915d-d3a83fef1893 0xc00523b1d7 0xc00523b1d8}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:16:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8323686f-6f07-4d6f-915d-d3a83fef1893\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00523b278 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May 6 22:16:14.563: INFO: Pod "test-rollover-deployment-98c5f4599-cchn5" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-cchn5 test-rollover-deployment-98c5f4599- deployment-9666 d16b1d0b-b548-4c23-b4e0-e43390dec066 47762 0 2022-05-06 22:16:00 +0000 UTC map[name:rollover-pod pod-template-hash:98c5f4599] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "2e:0e:0c:c5:da:2e", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.139" ], "mac": "2e:0e:0c:c5:da:2e", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 029490fe-b434-43a7-9b27-5b675572c494 0xc0035758df 0xc0035758f0}] [] [{kube-controller-manager Update v1 2022-05-06 22:16:00 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"029490fe-b434-43a7-9b27-5b675572c494\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:16:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:16:03 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.139\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qk4lj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qk4lj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:16:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:16:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:16:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:16:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.139,StartTime:2022-05-06 22:16:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:16:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://db320e760b856f4e6a6ef94c347e09061d9aad76536e7f046c87fc30250e0549,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.139,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:16:14.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9666" for this suite.

• [SLOW TEST:23.136 seconds]
[sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":39,"skipped":710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
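The long run of "all replica sets need to contain the pod-template-hash label" entries above is the harness polling until the rollover completes; because the Deployment sets MinReadySeconds:10, AvailableReplicas lags ReadyReplicas by design, which is why the status stays at AvailableReplicas:1 for several polls. A sketch of an equivalent completeness poll (the interval, timeout, and helper name are illustrative, not the harness's own code):

    package sketch

    import (
    	"context"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForRollover polls until the Deployment has fully replaced its pods:
    // status caught up with the latest generation, all replicas updated, no
    // surplus (old) replicas left behind, and everything available.
    func waitForRollover(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		want := *d.Spec.Replicas
    		done := d.Status.ObservedGeneration >= d.Generation &&
    			d.Status.UpdatedReplicas == want &&
    			d.Status.Replicas == want && // old replica sets scaled to zero
    			d.Status.AvailableReplicas == want // minReadySeconds elapsed
    		return done, nil
    	})
    }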
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:49.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-756
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May 6 22:15:49.260: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May 6 22:15:49.295: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 6 22:15:51.300: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May 6 22:15:53.332: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:15:55.300: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:15:57.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:15:59.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:01.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:03.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:05.302: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:07.299: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:09.301: INFO: The status of Pod netserver-0 is Running (Ready = false)
May 6 22:16:11.301: INFO: The status of Pod netserver-0 is Running (Ready = true)
May 6 22:16:11.307: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May 6 22:16:15.329: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
May 6 22:16:15.329: INFO: Breadth first check of 10.244.3.68 on host 10.10.190.207...
May 6 22:16:15.332: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.71:9080/dial?request=hostname&protocol=udp&host=10.244.3.68&port=8081&tries=1'] Namespace:pod-network-test-756 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 6 22:16:15.332: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:16:15.423: INFO: Waiting for responses: map[]
May 6 22:16:15.423: INFO: reached 10.244.3.68 after 0/1 tries
May 6 22:16:15.423: INFO: Breadth first check of 10.244.4.135 on host 10.10.190.208...
May 6 22:16:15.426: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.3.71:9080/dial?request=hostname&protocol=udp&host=10.244.4.135&port=8081&tries=1'] Namespace:pod-network-test-756 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
May 6 22:16:15.426: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:16:15.515: INFO: Waiting for responses: map[]
May 6 22:16:15.516: INFO: reached 10.244.4.135 after 0/1 tries
May 6 22:16:15.516: INFO: Going to retry 0 out of 2 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:16:15.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-756" for this suite.

• [SLOW TEST:26.299 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":351,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
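The ExecWithOptions entries show how connectivity is actually verified: curl is run inside a probe pod against the agnhost webserver's /dial endpoint, which sends UDP "hostname" requests to the target pod and reports which backends answered. The same probe can be issued directly; this sketch decodes the reply generically rather than assuming agnhost's exact response schema:

    package sketch

    import (
    	"encoding/json"
    	"fmt"
    	"net/http"
    )

    // dialProbe asks the test-container-pod (agnhost webserver on :9080) to send
    // one UDP hostname request to target:8081, mirroring the curl in the log:
    //   /dial?request=hostname&protocol=udp&host=<target>&port=8081&tries=1
    func dialProbe(probeIP, targetIP string) (map[string]interface{}, error) {
    	url := fmt.Sprintf("http://%s:9080/dial?request=hostname&protocol=udp&host=%s&port=8081&tries=1",
    		probeIP, targetIP)
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()

    	// The body is a small JSON object (the harness waits on a "responses"
    	// list of answering hostnames); decoded generically here.
    	var out map[string]interface{}
    	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
    		return nil, err
    	}
    	return out, nil
    }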
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:16:15.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
May 6 22:16:15.639: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
May 6 22:16:15.643: INFO: starting watch
STEP: patching
STEP: updating
May 6 22:16:15.652: INFO: waiting for watch events with expected annotations
May 6 22:16:15.652: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:16:15.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-143" for this suite.

•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":31,"skipped":385,"failed":0}
SSS
------------------------------
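The create/get/list/watch/patch/delete steps above exercise the discovery.k8s.io/v1 EndpointSlice API directly rather than going through a Service. A compilable sketch of the create step (the addresses, port, and owning-Service label value are illustrative):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	discoveryv1 "k8s.io/api/discovery/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // createSlice creates a minimal IPv4 EndpointSlice, tied to a Service by the
    // kubernetes.io/service-name label, as the spec's "creating" step does.
    func createSlice(ctx context.Context, cs kubernetes.Interface, ns string) (*discoveryv1.EndpointSlice, error) {
    	portName := "http"
    	proto := corev1.ProtocolTCP
    	port := int32(80)
    	slice := &discoveryv1.EndpointSlice{
    		ObjectMeta: metav1.ObjectMeta{
    			GenerateName: "e2e-example-",
    			Labels:       map[string]string{discoveryv1.LabelServiceName: "example"},
    		},
    		AddressType: discoveryv1.AddressTypeIPv4,
    		Endpoints:   []discoveryv1.Endpoint{{Addresses: []string{"10.244.4.10"}}},
    		Ports:       []discoveryv1.EndpointPort{{Name: &portName, Protocol: &proto, Port: &port}},
    	}
    	return cs.DiscoveryV1().EndpointSlices(ns).Create(ctx, slice, metav1.CreateOptions{})
    }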
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:15:47.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
May 6 22:16:07.602: INFO: EndpointSlice for Service endpointslice-9287/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:16:17.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9287" for this suite.

• [SLOW TEST:30.143 seconds]
[sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":37,"skipped":648,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:16:15.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:186
[It] should get a host IP [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
May 6 22:16:15.734: INFO: The status of Pod pod-hostip-91020857-76ba-4cea-bfdd-0dbbc708e38c is Pending, waiting for it to be Running (with Ready = true)
May 6 22:16:17.738: INFO: The status of Pod pod-hostip-91020857-76ba-4cea-bfdd-0dbbc708e38c is Pending, waiting for it to be Running (with Ready = true)
May 6 22:16:19.738: INFO: The status of Pod pod-hostip-91020857-76ba-4cea-bfdd-0dbbc708e38c is Running (Ready = true)
May 6 22:16:19.742: INFO: Pod pod-hostip-91020857-76ba-4cea-bfdd-0dbbc708e38c has hostIP: 10.10.190.208
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:16:19.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-124" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":388,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
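The host-IP spec simply waits for the pod to reach Running and then reads status.hostIP, the address of the node it was scheduled onto (10.10.190.208 above). The equivalent poll in client-go (pod name, namespace, interval, and timeout are illustrative):

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // hostIPOf waits until the pod is running and its host IP is populated,
    // then returns that IP.
    func hostIPOf(ctx context.Context, cs kubernetes.Interface, ns, name string) (string, error) {
    	var hostIP string
    	err := wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		if pod.Status.Phase != corev1.PodRunning || pod.Status.HostIP == "" {
    			return false, nil
    		}
    		hostIP = pod.Status.HostIP
    		return true, nil
    	})
    	if err != nil {
    		return "", fmt.Errorf("pod %s/%s never reported a host IP: %w", ns, name, err)
    	}
    	return hostIP, nil
    }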
--connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n" May 6 22:15:48.074: INFO: stdout: "\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2" May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:15:48.074: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9783 exec execpod-affinityhz6vq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.34.222:80/ ; done' May 6 22:16:18.405: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n" May 6 22:16:18.406: INFO: stdout: "\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-j6hvw\naffinity-clusterip-transition-j6hvw\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-j6hvw\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-gndsz\naffinity-clusterip-transition-2hsl2" May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-j6hvw May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-j6hvw May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-j6hvw May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-gndsz May 6 22:16:18.406: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-9783 exec execpod-affinityhz6vq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.233.34.222:80/ ; done' May 6 22:16:18.864: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.233.34.222:80/\n" May 6 22:16:18.864: INFO: stdout: "\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2\naffinity-clusterip-transition-2hsl2" May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.864: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Received response from host: affinity-clusterip-transition-2hsl2 May 6 22:16:18.865: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9783, will wait for the garbage collector to delete the pods May 6 22:16:18.928: INFO: Deleting ReplicationController affinity-clusterip-transition took: 3.950059ms May 6 22:16:19.029: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.652059ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9783" for this suite. 
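Annotation: the transition exercised above is driven by the Service's spec.sessionAffinity field. With ClientIP set, all sixteen curl requests in the loop return the same pod name (as in the first stdout block); with None, the names mix across endpoints (as in the second). A minimal sketch of the equivalent manipulation, using an illustrative Service name ("affinity-demo") and ports rather than the generated ones in this run:

# Sketch only: a ClusterIP Service with client-IP session affinity.
# The name "affinity-demo", selector, and ports are illustrative assumptions.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: affinity-demo
spec:
  selector:
    app: affinity-demo
  ports:
  - port: 80
    targetPort: 9376
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
EOF

# Toggling the affinity, which is what the "switch session affinity" test does:
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"None"}}'
kubectl patch service affinity-demo -p '{"spec":{"sessionAffinity":"ClientIP"}}'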
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • [SLOW TEST:50.727 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":114,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:19.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-upd-d97ef5e2-574e-4dd8-88d7-01fdf85c8b2d STEP: Creating the pod May 6 22:16:19.855: INFO: The status of Pod pod-configmaps-c488875b-08fa-4887-b502-41efa62c5d8f is Pending, waiting for it to be Running (with Ready = true) May 6 22:16:21.859: INFO: The status of Pod pod-configmaps-c488875b-08fa-4887-b502-41efa62c5d8f is Pending, waiting for it to be Running (with Ready = true) May 6 22:16:23.859: INFO: The status of Pod pod-configmaps-c488875b-08fa-4887-b502-41efa62c5d8f is Pending, waiting for it to be Running (with Ready = true) May 6 22:16:25.859: INFO: The status of Pod pod-configmaps-c488875b-08fa-4887-b502-41efa62c5d8f is Running (Ready = true) STEP: Updating configmap configmap-test-upd-d97ef5e2-574e-4dd8-88d7-01fdf85c8b2d STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:27.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5852" for this suite. 
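Annotation: the ConfigMap test above mounts a ConfigMap as a volume, updates the ConfigMap, and waits for the kubelet's periodic sync to surface the new value in the mounted file. A minimal sketch of that pattern, with hypothetical names ("demo-config", "configmap-volume-demo"):

# Sketch: a ConfigMap mounted as a volume; an update to the ConfigMap
# propagates into the mounted file after the kubelet resyncs.
kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/config/data-1; echo; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF
# Update in place; the pod's /etc/config/data-1 eventually reads value-2.
kubectl create configmap demo-config --from-literal=data-1=value-2 \
  --dry-run=client -o yaml | kubectl replace -f -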
• [SLOW TEST:8.080 seconds] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":417,"failed":0} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:27.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7885.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:16:31.963: INFO: DNS probes using dns-7885/dns-test-6cf93f3f-8962-4bd9-b181-5e1a5d5fcd2f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:31.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7885" for this suite. 
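Annotation: the wheezy/jessie probe scripts above encode the cluster DNS naming rules: a headless-service pod is resolvable as <hostname>.<service>.<namespace>.svc.cluster.local, and every pod gets an A record built from its IP with dots replaced by dashes (that is what the awk pipeline constructs). For example, a pod with IP 10.244.4.146 in namespace dns-7885 would be checked like this:

# Pod A record (UDP and TCP, matching the dig +notcp / +tcp checks above):
dig +notcp +noall +answer +search 10-244-4-146.dns-7885.pod.cluster.local A
dig +tcp +noall +answer +search 10-244-4-146.dns-7885.pod.cluster.local A

# Headless-service hostname record probed via getent:
getent hosts dns-querier-2.dns-test-service-2.dns-7885.svc.cluster.local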
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":417,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:32.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser May 6 22:16:32.080: INFO: Waiting up to 5m0s for pod "security-context-237d16ca-4268-4cf3-814d-eceb3cd37051" in namespace "security-context-7793" to be "Succeeded or Failed" May 6 22:16:32.082: INFO: Pod "security-context-237d16ca-4268-4cf3-814d-eceb3cd37051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.730748ms May 6 22:16:34.087: INFO: Pod "security-context-237d16ca-4268-4cf3-814d-eceb3cd37051": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007149381s May 6 22:16:36.092: INFO: Pod "security-context-237d16ca-4268-4cf3-814d-eceb3cd37051": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012007775s STEP: Saw pod success May 6 22:16:36.092: INFO: Pod "security-context-237d16ca-4268-4cf3-814d-eceb3cd37051" satisfied condition "Succeeded or Failed" May 6 22:16:36.095: INFO: Trying to get logs from node node1 pod security-context-237d16ca-4268-4cf3-814d-eceb3cd37051 container test-container: STEP: delete the pod May 6 22:16:36.108: INFO: Waiting for pod security-context-237d16ca-4268-4cf3-814d-eceb3cd37051 to disappear May 6 22:16:36.109: INFO: Pod security-context-237d16ca-4268-4cf3-814d-eceb3cd37051 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:36.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-7793" for this suite. 
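Annotation: the Security Context test above sets runAsUser and runAsGroup at the pod level, which applies to every container in the pod, then verifies the container's effective IDs. A minimal sketch with illustrative uid/gid values (the suite's actual values are not shown in this log):

# Sketch: pod-level securityContext; `id` should report the requested IDs.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000      # illustrative uid
    runAsGroup: 3000     # illustrative gid
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id"]
  restartPolicy: Never
EOF
kubectl logs security-context-demo   # expect: uid=1000 gid=3000 ...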
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":35,"skipped":444,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:36.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-cebfba06-7f14-45a3-ad6d-3b58901d599f STEP: Creating a pod to test consume configMaps May 6 22:16:36.215: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da" in namespace "configmap-9405" to be "Succeeded or Failed" May 6 22:16:36.218: INFO: Pod "pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339504ms May 6 22:16:38.221: INFO: Pod "pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005665312s May 6 22:16:40.226: INFO: Pod "pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010327654s STEP: Saw pod success May 6 22:16:40.226: INFO: Pod "pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da" satisfied condition "Succeeded or Failed" May 6 22:16:40.228: INFO: Trying to get logs from node node1 pod pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da container agnhost-container: STEP: delete the pod May 6 22:16:40.244: INFO: Waiting for pod pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da to disappear May 6 22:16:40.246: INFO: Pod pod-configmaps-d4b4d2cd-68b8-432e-84e4-bd9423ce96da no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:40.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9405" for this suite. 
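Annotation: the non-root ConfigMap test differs from the earlier volume test mainly in who reads the file: the pod runs as a non-root uid, so the projected file mode must permit it. A brief sketch under the same hypothetical names, adding an explicit defaultMode:

# Sketch: non-root consumption of a configMap volume (uid 1000 illustrative).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config    # assumes the ConfigMap from the sketch above
      defaultMode: 0444    # world-readable projected files
  restartPolicy: Never
EOF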
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":472,"failed":0} SS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:40.263: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:16:40.299: INFO: Waiting up to 5m0s for pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c" in namespace "security-context-test-8794" to be "Succeeded or Failed" May 6 22:16:40.302: INFO: Pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.917777ms May 6 22:16:42.305: INFO: Pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00617349s May 6 22:16:44.309: INFO: Pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010651858s May 6 22:16:46.313: INFO: Pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014703046s May 6 22:16:46.314: INFO: Pod "busybox-user-65534-8b6622cb-d444-4f3c-8177-59ec6099689c" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:46.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8794" for this suite. 
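Annotation: here runAsUser is set at the container level rather than the pod level, pinning the container to uid 65534 (conventionally "nobody"); a container-level securityContext overrides any pod-level setting. Minimal sketch:

# Sketch: container-level runAsUser; the container runs as uid 65534.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-demo
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]
    securityContext:
      runAsUser: 65534
  restartPolicy: Never
EOF
kubectl logs busybox-user-65534-demo   # expect: 65534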
• [SLOW TEST:6.060 seconds] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 When creating a container with runAsUser /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:50 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":474,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:26.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1548 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 May 6 22:16:26.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5055 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --labels=run=e2e-test-httpd-pod' May 6 22:16:27.016: INFO: stderr: "" May 6 22:16:27.016: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 6 22:16:32.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5055 get pod e2e-test-httpd-pod -o json' May 6 22:16:32.227: INFO: stderr: "" May 6 22:16:32.227: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"k8s.v1.cni.cncf.io/network-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.146\\\"\\n ],\\n \\\"mac\\\": \\\"d6:78:4f:d4:88:13\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"k8s.v1.cni.cncf.io/networks-status\": \"[{\\n \\\"name\\\": \\\"default-cni-network\\\",\\n \\\"interface\\\": \\\"eth0\\\",\\n \\\"ips\\\": [\\n \\\"10.244.4.146\\\"\\n ],\\n \\\"mac\\\": \\\"d6:78:4f:d4:88:13\\\",\\n \\\"default\\\": true,\\n \\\"dns\\\": {}\\n}]\",\n \"kubernetes.io/psp\": \"collectd\"\n },\n \"creationTimestamp\": \"2022-05-06T22:16:26Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5055\",\n \"resourceVersion\": \"48211\",\n \"uid\": \"33a2a3c6-9e3b-44b6-ac21-9d6231a563d9\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"Always\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n 
\"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-vk6cs\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"node2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-vk6cs\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-06T22:16:26Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-06T22:16:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-06T22:16:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-05-06T22:16:26Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"docker://5cc202b58311cd74e71a8d317a199e0b20a1fa3afcf81a6b8d2bfc76f9031228\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"docker-pullable://k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-05-06T22:16:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.10.190.208\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.4.146\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.4.146\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-05-06T22:16:26Z\"\n }\n}\n" STEP: replace the image in the pod May 6 22:16:32.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5055 replace -f -' May 6 22:16:32.598: INFO: stderr: "" May 6 22:16:32.598: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1552 May 6 22:16:32.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5055 delete pods e2e-test-httpd-pod' May 6 
22:16:46.844: INFO: stderr: "" May 6 22:16:46.844: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:46.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5055" for this suite. • [SLOW TEST:19.999 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545 should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":8,"skipped":116,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} S ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:17.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:241 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:293 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a replication controller May 6 22:16:17.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 create -f -' May 6 22:16:18.068: INFO: stderr: "" May 6 22:16:18.068: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:16:18.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:18.246: INFO: stderr: "" May 6 22:16:18.246: INFO: stdout: "update-demo-nautilus-f567k update-demo-nautilus-zhkn2 " May 6 22:16:18.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-f567k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:18.422: INFO: stderr: "" May 6 22:16:18.422: INFO: stdout: "" May 6 22:16:18.422: INFO: update-demo-nautilus-f567k is created but not running May 6 22:16:23.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:23.592: INFO: stderr: "" May 6 22:16:23.592: INFO: stdout: "update-demo-nautilus-f567k update-demo-nautilus-zhkn2 " May 6 22:16:23.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-f567k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:23.778: INFO: stderr: "" May 6 22:16:23.778: INFO: stdout: "" May 6 22:16:23.778: INFO: update-demo-nautilus-f567k is created but not running May 6 22:16:28.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:28.961: INFO: stderr: "" May 6 22:16:28.961: INFO: stdout: "update-demo-nautilus-f567k update-demo-nautilus-zhkn2 " May 6 22:16:28.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-f567k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:29.138: INFO: stderr: "" May 6 22:16:29.138: INFO: stdout: "true" May 6 22:16:29.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-f567k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:16:29.320: INFO: stderr: "" May 6 22:16:29.320: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:16:29.320: INFO: validating pod update-demo-nautilus-f567k May 6 22:16:29.325: INFO: got data: { "image": "nautilus.jpg" } May 6 22:16:29.325: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:16:29.325: INFO: update-demo-nautilus-f567k is verified up and running May 6 22:16:29.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:29.497: INFO: stderr: "" May 6 22:16:29.497: INFO: stdout: "true" May 6 22:16:29.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:16:29.662: INFO: stderr: "" May 6 22:16:29.662: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:16:29.662: INFO: validating pod update-demo-nautilus-zhkn2 May 6 22:16:29.666: INFO: got data: { "image": "nautilus.jpg" } May 6 22:16:29.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:16:29.666: INFO: update-demo-nautilus-zhkn2 is verified up and running STEP: scaling down the replication controller May 6 22:16:29.675: INFO: scanned /root for discovery docs: May 6 22:16:29.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 scale rc update-demo-nautilus --replicas=1 --timeout=5m' May 6 22:16:29.884: INFO: stderr: "" May 6 22:16:29.884: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:16:29.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:30.071: INFO: stderr: "" May 6 22:16:30.071: INFO: stdout: "update-demo-nautilus-f567k update-demo-nautilus-zhkn2 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 22:16:35.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:35.254: INFO: stderr: "" May 6 22:16:35.254: INFO: stdout: "update-demo-nautilus-f567k update-demo-nautilus-zhkn2 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 22:16:40.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:40.437: INFO: stderr: "" May 6 22:16:40.437: INFO: stdout: "update-demo-nautilus-zhkn2 " May 6 22:16:40.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:40.607: INFO: stderr: "" May 6 22:16:40.607: INFO: stdout: "true" May 6 22:16:40.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:16:40.779: INFO: stderr: "" May 6 22:16:40.779: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:16:40.779: INFO: validating pod update-demo-nautilus-zhkn2 May 6 22:16:40.781: INFO: got data: { "image": "nautilus.jpg" } May 6 22:16:40.781: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 6 22:16:40.781: INFO: update-demo-nautilus-zhkn2 is verified up and running STEP: scaling up the replication controller May 6 22:16:40.791: INFO: scanned /root for discovery docs: May 6 22:16:40.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 scale rc update-demo-nautilus --replicas=2 --timeout=5m' May 6 22:16:41.012: INFO: stderr: "" May 6 22:16:41.012: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:16:41.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:41.178: INFO: stderr: "" May 6 22:16:41.178: INFO: stdout: "update-demo-nautilus-k8nw4 update-demo-nautilus-zhkn2 " May 6 22:16:41.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-k8nw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:41.347: INFO: stderr: "" May 6 22:16:41.347: INFO: stdout: "" May 6 22:16:41.347: INFO: update-demo-nautilus-k8nw4 is created but not running May 6 22:16:46.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' May 6 22:16:46.519: INFO: stderr: "" May 6 22:16:46.519: INFO: stdout: "update-demo-nautilus-k8nw4 update-demo-nautilus-zhkn2 " May 6 22:16:46.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-k8nw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:46.665: INFO: stderr: "" May 6 22:16:46.665: INFO: stdout: "true" May 6 22:16:46.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-k8nw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:16:46.818: INFO: stderr: "" May 6 22:16:46.818: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:16:46.818: INFO: validating pod update-demo-nautilus-k8nw4 May 6 22:16:46.821: INFO: got data: { "image": "nautilus.jpg" } May 6 22:16:46.821: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:16:46.821: INFO: update-demo-nautilus-k8nw4 is verified up and running May 6 22:16:46.821: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' May 6 22:16:46.993: INFO: stderr: "" May 6 22:16:46.993: INFO: stdout: "true" May 6 22:16:46.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods update-demo-nautilus-zhkn2 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' May 6 22:16:47.164: INFO: stderr: "" May 6 22:16:47.164: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4" May 6 22:16:47.164: INFO: validating pod update-demo-nautilus-zhkn2 May 6 22:16:47.168: INFO: got data: { "image": "nautilus.jpg" } May 6 22:16:47.168: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:16:47.168: INFO: update-demo-nautilus-zhkn2 is verified up and running STEP: using delete to clean up resources May 6 22:16:47.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 delete --grace-period=0 --force -f -' May 6 22:16:47.306: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:16:47.306: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 22:16:47.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get rc,svc -l name=update-demo --no-headers' May 6 22:16:47.513: INFO: stderr: "No resources found in kubectl-454 namespace.\n" May 6 22:16:47.513: INFO: stdout: "" May 6 22:16:47.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-454 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:16:47.669: INFO: stderr: "" May 6 22:16:47.669: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:47.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-454" for this suite. 
• [SLOW TEST:30.053 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291 should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":38,"skipped":649,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:47.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap that has name configmap-test-emptyKey-1f5da040-dcea-4753-8e16-d9bf4f45087a [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:47.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-598" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":39,"skipped":732,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:47.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a ServiceAccount STEP: watching for the ServiceAccount to be added STEP: patching the ServiceAccount STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) STEP: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:47.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-311" for this suite. 
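Annotation: the ServiceAccount lifecycle steps above (create, watch, patch, list by label selector, delete) map onto plain kubectl operations. A sketch with a hypothetical name and label:

# Sketch: the ServiceAccount lifecycle; the name and label are assumptions.
kubectl create serviceaccount lifecycle-demo
kubectl patch serviceaccount lifecycle-demo -p '{"metadata":{"labels":{"purpose":"demo"}}}'
kubectl get serviceaccounts -l purpose=demo    # found via the label selector
kubectl delete serviceaccount lifecycle-demo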
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":40,"skipped":737,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:46.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 22:16:51.913: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:16:51.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1486" for this suite. • [SLOW TEST:5.094 seconds] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 blackbox test /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41 on terminated container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:134 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":117,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:29.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service in namespace services-6314 STEP: creating service affinity-nodeport-transition in namespace services-6314 STEP: creating replication controller affinity-nodeport-transition in namespace services-6314 I0506 22:14:29.686914 32 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6314, replica count: 3 I0506 22:14:32.737823 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:14:35.738096 32 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:14:35.748: INFO: Creating new exec pod May 6 22:14:42.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80' May 6 22:14:43.122: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" May 6 22:14:43.122: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:14:43.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.21.243 80' May 6 22:14:43.559: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.21.243 80\nConnection to 10.233.21.243 80 port [tcp/http] succeeded!\n" May 6 22:14:43.559: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" May 6 22:14:43.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:43.799: INFO: rc: 1 May 6 22:14:43.799: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:44.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:45.167: INFO: rc: 1 May 6 22:14:45.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:14:45.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:46.032: INFO: rc: 1 May 6 22:14:46.032: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:46.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:47.089: INFO: rc: 1 May 6 22:14:47.089: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:47.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:48.186: INFO: rc: 1 May 6 22:14:48.186: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:48.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:49.066: INFO: rc: 1 May 6 22:14:49.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:14:49.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:50.029: INFO: rc: 1 May 6 22:14:50.029: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30167 + echo hostName nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:50.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:51.056: INFO: rc: 1 May 6 22:14:51.056: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:51.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:52.052: INFO: rc: 1 May 6 22:14:52.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:14:52.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:14:53.524: INFO: rc: 1 May 6 22:14:53.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:25.075: INFO: rc: 1 May 6 22:16:25.075: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:25.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:26.053: INFO: rc: 1 May 6 22:16:26.053: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:26.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:27.052: INFO: rc: 1 May 6 22:16:27.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:27.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:28.067: INFO: rc: 1 May 6 22:16:28.067: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:28.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:29.090: INFO: rc: 1 May 6 22:16:29.090: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:29.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:30.052: INFO: rc: 1 May 6 22:16:30.052: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:30.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:31.069: INFO: rc: 1 May 6 22:16:31.069: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:32.060: INFO: rc: 1 May 6 22:16:32.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:32.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:33.070: INFO: rc: 1 May 6 22:16:33.070: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:33.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:34.108: INFO: rc: 1 May 6 22:16:34.108: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:34.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:35.021: INFO: rc: 1 May 6 22:16:35.021: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:35.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:36.062: INFO: rc: 1 May 6 22:16:36.062: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:36.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:37.055: INFO: rc: 1 May 6 22:16:37.055: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 30167 + echo hostName nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:37.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:39.118: INFO: rc: 1 May 6 22:16:39.118: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:39.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:40.049: INFO: rc: 1 May 6 22:16:40.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:40.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:41.074: INFO: rc: 1 May 6 22:16:41.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:41.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:42.092: INFO: rc: 1 May 6 22:16:42.092: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:42.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:43.054: INFO: rc: 1 May 6 22:16:43.054: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:43.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:44.066: INFO: rc: 1 May 6 22:16:44.066: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:44.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167' May 6 22:16:44.348: INFO: rc: 1 May 6 22:16:44.348: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 exec execpod-affinityhwsc7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 30167: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 30167 nc: connect to 10.10.190.207 port 30167 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:44.349: FAIL: Unexpected error: <*errors.errorString | 0xc004d48290>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30167 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30167 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc0010e9b80, 0x77b33d8, 0xc00368e420, 0xc00812ec80, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 +0x625 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2531
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1862 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001482d80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001482d80)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001482d80, 0x70f99e8)
    /usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1238 +0x2b3
May 6 22:16:44.350: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-6314, will wait for the garbage collector to delete the pods
May 6 22:16:44.427: INFO: Deleting ReplicationController affinity-nodeport-transition took: 4.493816ms
May 6 22:16:44.528: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 100.506314ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
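For reference, the reachability check that timed out above is nothing more than a shell probe run through kubectl exec: netcat from the client pod against the node IP and NodePort. A minimal manual sketch of the same probe, assuming the namespace, pod name, node IP, and port from this run (services-6314, execpod-affinityhwsc7, 10.10.190.207, 30167) still exist:

  # Probe the NodePort from inside the client pod; -w 2 caps the connect timeout at 2s.
  kubectl --kubeconfig=/root/.kube/config --namespace=services-6314 \
    exec execpod-affinityhwsc7 -- /bin/sh -x -c \
    'echo hostName | nc -v -t -w 2 10.10.190.207 30167'

On success the command exits 0 and prints the name of the backend pod that answered, which is what lets the suite check session affinity across repeated probes; the "Connection refused" seen on every attempt here means the TCP connect itself never succeeded on 10.10.190.207:30167 within the 2m0s window the suite allows.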
STEP: Collecting events from namespace "services-6314".
STEP: Found 27 events.
May 6 22:16:56.845: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-fthrd: { } Scheduled: Successfully assigned services-6314/affinity-nodeport-transition-fthrd to node1
May 6 22:16:56.845: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-mclwf: { } Scheduled: Successfully assigned services-6314/affinity-nodeport-transition-mclwf to node2
May 6 22:16:56.845: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for affinity-nodeport-transition-r6tgq: { } Scheduled: Successfully assigned services-6314/affinity-nodeport-transition-r6tgq to node2
May 6 22:16:56.845: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpod-affinityhwsc7: { } Scheduled: Successfully assigned services-6314/execpod-affinityhwsc7 to node2
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:29 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-r6tgq
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:29 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-fthrd
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:29 +0000 UTC - event for affinity-nodeport-transition: {replication-controller } SuccessfulCreate: Created pod: affinity-nodeport-transition-mclwf
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:32 +0000 UTC - event for affinity-nodeport-transition-fthrd: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:32 +0000 UTC - event for affinity-nodeport-transition-fthrd: {kubelet node1} Created: Created container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:32 +0000 UTC - event for affinity-nodeport-transition-fthrd: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 280.780384ms
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-fthrd: {kubelet node1} Started: Started container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-mclwf: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 505.498168ms
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-mclwf: {kubelet node2} Started: Started container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-mclwf: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-mclwf: {kubelet node2} Created: Created container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-r6tgq: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 288.579263ms
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-r6tgq: {kubelet node2} Created: Created container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-r6tgq: {kubelet node2} Started: Started container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:33 +0000 UTC - event for affinity-nodeport-transition-r6tgq: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:37 +0000 UTC - event for execpod-affinityhwsc7: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:38 +0000 UTC - event for execpod-affinityhwsc7: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 1.178593471s
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:39 +0000 UTC - event for execpod-affinityhwsc7: {kubelet node2} Started: Started container agnhost-container
May 6 22:16:56.845: INFO: At 2022-05-06 22:14:39 +0000 UTC - event for execpod-affinityhwsc7: {kubelet node2} Created: Created container agnhost-container
May 6 22:16:56.845: INFO: At 2022-05-06 22:16:44 +0000 UTC - event for affinity-nodeport-transition-fthrd: {kubelet node1} Killing: Stopping container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:16:44 +0000 UTC - event for affinity-nodeport-transition-mclwf: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:16:44 +0000 UTC - event for affinity-nodeport-transition-r6tgq: {kubelet node2} Killing: Stopping container affinity-nodeport-transition
May 6 22:16:56.845: INFO: At 2022-05-06 22:16:44 +0000 UTC - event for execpod-affinityhwsc7: {kubelet node2} Killing: Stopping container agnhost-container
May 6 22:16:56.848: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
May 6 22:16:56.848: INFO:
May 6 22:16:56.851: INFO: Logging node info for node master1
May 6 22:16:56.854: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 48671 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true
flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:53 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:53 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:53 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:53 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 
kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:16:56.855: INFO: Logging kubelet events for node master1
May 6 22:16:56.857: INFO: Logging pods the kubelet thinks is on node master1
May 6 22:16:56.889: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded)
May 6 22:16:56.889: INFO: Container docker-registry ready: true, restart count 0
May 6 22:16:56.889: INFO: Container nginx ready: true, restart count 0
May 6 22:16:56.889: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-scheduler ready: true, restart count 0
May 6 22:16:56.889: INFO: kube-proxy-bnqzh started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:16:56.889: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:16:56.889: INFO: Init container install-cni ready: true, restart count 0
May 6 22:16:56.889: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:16:56.889: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container coredns ready: true, restart count 1
May 6 22:16:56.889: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:16:56.889: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-controller-manager ready: true, restart count 2
May 6 22:16:56.889: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-multus ready: true, restart count 1
May 6 22:16:56.889: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:16:56.889: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:16:56.889: INFO: Container node-exporter ready: true, restart count 0
May 6 22:16:56.970: INFO: Latency metrics for node
master1 May 6 22:16:56.970: INFO: Logging node info for node master2 May 6 22:16:56.973: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 48560 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 
UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:49 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:49 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:49 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:49 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:16:56.974: INFO: Logging kubelet events for node master2
May 6 22:16:56.976: INFO: Logging pods the kubelet thinks is on node master2
May 6 22:16:56.990: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-multus ready: true, restart count 1
May 6 22:16:56.990: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-scheduler ready: true, restart count 2
May 6 22:16:56.990: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:16:56.990: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:16:56.990: INFO: Init container install-cni ready: true, restart count 0
May 6 22:16:56.990: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:16:56.990: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:16:56.990: INFO: Container node-exporter ready: true, restart count 0
May 6 22:16:56.990: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-controller-manager ready: true, restart count 1
May 6 22:16:56.990: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:16:56.990: INFO: dns-autoscaler-7df78bfcfb-srh4b started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:56.990: INFO: Container autoscaler ready: true, restart count 1
May 6 22:16:57.070: INFO: Latency metrics for node master2
May 6 22:16:57.071: INFO: Logging node info for node master3
May 6 22:16:57.073: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 48506 0 2022-05-06
20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 
20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:47 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:47 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:47 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:47 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a 
quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
May 6 22:16:57.073: INFO: Logging kubelet events for node master3
May 6 22:16:57.074: INFO: Logging pods the kubelet thinks is on node master3
May 6 22:16:57.088: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:16:57.088: INFO: kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:16:57.088: INFO: Init container install-cni ready: true, restart count 2
May 6 22:16:57.088: INFO: Container kube-flannel ready: true, restart count 1
May 6 22:16:57.088: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:16:57.088: INFO: Container node-exporter ready: true, restart count 0
May 6 22:16:57.088: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-controller-manager ready: true, restart count 3
May 6 22:16:57.088: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-scheduler ready: true, restart count 2
May 6 22:16:57.088: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container coredns ready: true, restart count 1
May 6 22:16:57.088: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container nfd-controller ready: true, restart count 0
May 6 22:16:57.088: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-apiserver ready: true, restart count 0
May 6 22:16:57.088: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:16:57.088: INFO: Container kube-multus ready: true, restart count 1
May 6 22:16:57.173: INFO: Latency metrics for node master3
May 6 22:16:57.173: INFO: Logging node info for node node1
May 6 22:16:57.177: INFO: Node Info: &Node{ObjectMeta:{node1
851b0a69-efd4-49b7-98ef-f0cfe2d311c6 48575 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major 
nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:50 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:50 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:50 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:50 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:16:57.178: INFO: Logging kubelet events for node node1 May 6 22:16:57.180: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:16:57.195: INFO: nginx-proxy-node1 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:16:57.195: INFO: 
kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:16:57.195: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:16:57.195: INFO: Init container install-cni ready: true, restart count 2 May 6 22:16:57.195: INFO: Container kube-flannel ready: true, restart count 3 May 6 22:16:57.195: INFO: forbid-27531256-kn4xx started at 2022-05-06 22:16:00 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container c ready: true, restart count 0 May 6 22:16:57.195: INFO: externalname-service-4zmvl started at 2022-05-06 22:14:52 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container externalname-service ready: true, restart count 0 May 6 22:16:57.195: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:16:57.195: INFO: busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 started at 2022-05-06 22:13:17 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container busybox ready: true, restart count 0 May 6 22:16:57.195: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container kube-multus ready: true, restart count 1 May 6 22:16:57.195: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:16:57.195: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:16:57.195: INFO: Container node-exporter ready: true, restart count 0 May 6 22:16:57.195: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:16:57.195: INFO: Container collectd ready: true, restart count 0 May 6 22:16:57.195: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:16:57.195: INFO: Container rbac-proxy ready: true, restart count 0 May 6 22:16:57.195: INFO: pod-subpath-test-projected-dc72 started at 2022-05-06 22:16:52 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.195: INFO: Container test-container-subpath-projected-dc72 ready: true, restart count 0 May 6 22:16:57.195: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded) May 6 22:16:57.196: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:16:57.196: INFO: Container prometheus-operator ready: true, restart count 0 May 6 22:16:57.196: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded) May 6 22:16:57.196: INFO: Container config-reloader ready: true, restart count 0 May 6 22:16:57.196: INFO: Container custom-metrics-apiserver ready: true, restart count 0 May 6 22:16:57.196: INFO: Container grafana ready: true, restart count 0 May 6 22:16:57.196: INFO: Container prometheus ready: true, restart count 1 May 6 22:16:57.196: INFO: busybox-3bb61db8-5678-4050-9203-125f691b2462 started at 2022-05-06 22:16:46 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.196: INFO: Container busybox ready: true, restart count 0 May 6 22:16:57.196: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.196: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 
22:16:57.196: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded) May 6 22:16:57.196: INFO: Container discover ready: false, restart count 0 May 6 22:16:57.196: INFO: Container init ready: false, restart count 0 May 6 22:16:57.196: INFO: Container install ready: false, restart count 0 May 6 22:16:57.196: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded) May 6 22:16:57.196: INFO: Container nodereport ready: true, restart count 0 May 6 22:16:57.196: INFO: Container reconcile ready: true, restart count 0 May 6 22:16:57.433: INFO: Latency metrics for node node1 May 6 22:16:57.433: INFO: Logging node info for node node2 May 6 22:16:57.436: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 48700 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: 
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:56 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:56 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:16:57.437: INFO: Logging kubelet events for node node2 May 6 22:16:57.440: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:16:57.460: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.460: INFO: Container tas-extender ready: true, restart count 0 May 6 22:16:57.460: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.460: INFO: Container kube-multus ready: true, restart count 1 May 6 22:16:57.460: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:16:57.460: INFO: Container nodereport ready: true, restart count 0 May 6 22:16:57.460: INFO: Container reconcile ready: true, restart count 0 May 6 22:16:57.460: INFO: ss2-0 started at (0+0 container statuses recorded) May 6 22:16:57.461: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:16:57.461: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded) May 6 22:16:57.461: INFO: Container collectd ready: true, restart count 0 May 6 22:16:57.461: INFO: Container collectd-exporter ready: true, restart count 0 May 6 22:16:57.461: INFO: 
Container rbac-proxy ready: true, restart count 0 May 6 22:16:57.461: INFO: externalname-service-gqgjx started at 2022-05-06 22:14:52 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container externalname-service ready: true, restart count 0 May 6 22:16:57.461: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded) May 6 22:16:57.461: INFO: Container discover ready: false, restart count 0 May 6 22:16:57.461: INFO: Container init ready: false, restart count 0 May 6 22:16:57.461: INFO: Container install ready: false, restart count 0 May 6 22:16:57.461: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:16:57.461: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:16:57.461: INFO: Container node-exporter ready: true, restart count 0 May 6 22:16:57.461: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1 May 6 22:16:57.461: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container kubernetes-dashboard ready: true, restart count 2 May 6 22:16:57.461: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container kube-sriovdp ready: true, restart count 0 May 6 22:16:57.461: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container cmk-webhook ready: true, restart count 0 May 6 22:16:57.461: INFO: ss2-1 started at 2022-05-06 22:16:18 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container webserver ready: true, restart count 0 May 6 22:16:57.461: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container nginx-proxy ready: true, restart count 2 May 6 22:16:57.461: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:16:57.461: INFO: Init container install-cni ready: true, restart count 1 May 6 22:16:57.461: INFO: Container kube-flannel ready: true, restart count 2 May 6 22:16:57.461: INFO: execpodwth54 started at 2022-05-06 22:14:58 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container agnhost-container ready: true, restart count 0 May 6 22:16:57.461: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:16:57.461: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:16:57.692: INFO: Latency metrics for node node2 May 6 22:16:57.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6314" for this suite. 
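For reference, the per-node pod dumps above ("Logging pods the kubelet thinks is on node ...") can be reproduced with a pod list filtered by the spec.nodeName field. A minimal client-go sketch of that query — illustrative only, not the e2e framework's actual implementation; the kubeconfig path and node name are the ones appearing in this log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig the suite logs above (>>> kubeConfig).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// The field selector restricts the list to pods bound to "node2",
	// across all namespaces.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=node2"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, st.Name, st.Ready, st.RestartCount)
		}
	}
}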
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750 • Failure [148.049 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:16:44.349: Unexpected error: <*errors.errorString | 0xc004d48290>: { s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30167 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint 10.10.190.207:30167 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 ------------------------------ {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":387,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:57.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin May 6 22:16:57.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468" in namespace "downward-api-4408" to be "Succeeded or Failed" May 6 22:16:57.786: INFO: Pod "downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.117522ms May 6 22:16:59.790: INFO: Pod "downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006132811s May 6 22:17:01.794: INFO: Pod "downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010009034s STEP: Saw pod success May 6 22:17:01.794: INFO: Pod "downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468" satisfied condition "Succeeded or Failed" May 6 22:17:01.797: INFO: Trying to get logs from node node2 pod downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468 container client-container: STEP: delete the pod May 6 22:17:01.812: INFO: Waiting for pod downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468 to disappear May 6 22:17:01.814: INFO: Pod downwardapi-volume-7dcfb742-037d-44a0-9b99-4c1608a6e468 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:01.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4408" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":406,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:01.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota May 6 22:17:01.872: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 22:17:06.880: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:06.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2516" for this suite. 
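The "getting scale subresource" and "updating a scale subresource" steps above exercise the ReplicaSet's /scale endpoint. A minimal client-go sketch of those two operations — illustrative only; the namespace and ReplicaSet name are the ones from this test, which the suite has already torn down:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	rsClient := cs.AppsV1().ReplicaSets("replicaset-2516")

	// "getting scale subresource": GET .../replicasets/test-rs/scale
	scale, err := rsClient.GetScale(context.TODO(), "test-rs", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("current replicas: %d\n", scale.Spec.Replicas)

	// "updating a scale subresource": send the modified Scale object back,
	// which is what the test then verifies via the ReplicaSet's Spec.Replicas.
	scale.Spec.Replicas = 2
	if _, err := rsClient.UpdateScale(context.TODO(), "test-rs", scale,
		metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}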
• [SLOW TEST:5.061 seconds] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":23,"skipped":415,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:14:52.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a service externalname-service with the type=ExternalName in namespace services-2135 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-2135 I0506 22:14:52.607108 27 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2135, replica count: 2 I0506 22:14:55.658486 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 22:14:58.659430 27 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 22:14:58.659: INFO: Creating new exec pod May 6 22:15:03.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' May 6 22:15:03.947: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" May 6 22:15:03.947: INFO: stdout: "externalname-service-gqgjx" May 6 22:15:03.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.233.14.214 80' May 6 22:15:04.192: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.233.14.214 80\nConnection to 10.233.14.214 80 port [tcp/http] succeeded!\n" May 6 22:15:04.192: INFO: stdout: "externalname-service-4zmvl" May 6 22:15:04.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:04.437: INFO: rc: 1 May 6 22:15:04.437: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
May 6 22:15:05.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:05.679: INFO: rc: 1 May 6 22:15:05.679: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying...
[Identical probe-and-retry cycles from May 6 22:15:06.439 through 22:15:28.676 elided: roughly once per second the same kubectl exec ... nc -v -t -w 2 10.10.190.207 32117 probe returned rc: 1 with "nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused", each followed by "Retrying...".]
May 6 22:15:29.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:29.701: INFO: rc: 1 May 6 22:15:29.701: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:30.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:30.691: INFO: rc: 1 May 6 22:15:30.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:31.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:31.705: INFO: rc: 1 May 6 22:15:31.706: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:32.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:32.724: INFO: rc: 1 May 6 22:15:32.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:33.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:33.676: INFO: rc: 1 May 6 22:15:33.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:34.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:34.684: INFO: rc: 1 May 6 22:15:34.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:35.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:35.683: INFO: rc: 1 May 6 22:15:35.683: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:36.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:36.848: INFO: rc: 1 May 6 22:15:36.848: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:37.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:37.683: INFO: rc: 1 May 6 22:15:37.683: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:38.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:39.133: INFO: rc: 1 May 6 22:15:39.133: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:39.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:39.833: INFO: rc: 1 May 6 22:15:39.833: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:40.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:40.692: INFO: rc: 1 May 6 22:15:40.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:41.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:41.710: INFO: rc: 1 May 6 22:15:41.710: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:42.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:42.707: INFO: rc: 1 May 6 22:15:42.707: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:43.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:43.691: INFO: rc: 1 May 6 22:15:43.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:44.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:45.241: INFO: rc: 1 May 6 22:15:45.241: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:45.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:46.260: INFO: rc: 1 May 6 22:15:46.260: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:46.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:46.998: INFO: rc: 1 May 6 22:15:46.998: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:47.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:47.723: INFO: rc: 1 May 6 22:15:47.723: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:48.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:49.067: INFO: rc: 1 May 6 22:15:49.067: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:49.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:50.013: INFO: rc: 1 May 6 22:15:50.013: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:50.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:50.743: INFO: rc: 1 May 6 22:15:50.743: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:51.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:51.780: INFO: rc: 1 May 6 22:15:51.780: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:52.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:53.033: INFO: rc: 1 May 6 22:15:53.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:53.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:53.817: INFO: rc: 1 May 6 22:15:53.817: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:54.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:54.697: INFO: rc: 1 May 6 22:15:54.697: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:55.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:55.686: INFO: rc: 1 May 6 22:15:55.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:56.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:56.700: INFO: rc: 1 May 6 22:15:56.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:57.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:57.803: INFO: rc: 1 May 6 22:15:57.803: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:15:58.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:15:58.720: INFO: rc: 1 May 6 22:15:58.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:15:59.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:00.168: INFO: rc: 1 May 6 22:16:00.168: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:00.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:00.789: INFO: rc: 1 May 6 22:16:00.789: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:01.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:01.684: INFO: rc: 1 May 6 22:16:01.684: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:02.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:02.720: INFO: rc: 1 May 6 22:16:02.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:03.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:03.670: INFO: rc: 1 May 6 22:16:03.670: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:04.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:04.690: INFO: rc: 1 May 6 22:16:04.691: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:05.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:05.700: INFO: rc: 1 May 6 22:16:05.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:06.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:06.695: INFO: rc: 1 May 6 22:16:06.695: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:07.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:07.686: INFO: rc: 1 May 6 22:16:07.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:08.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:09.040: INFO: rc: 1 May 6 22:16:09.040: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:09.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:09.700: INFO: rc: 1 May 6 22:16:09.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:10.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:10.684: INFO: rc: 1 May 6 22:16:10.685: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:11.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:11.786: INFO: rc: 1 May 6 22:16:11.786: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + nc -v -t -w 2 10.10.190.207 32117 + echo hostName nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:12.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:12.676: INFO: rc: 1 May 6 22:16:12.676: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:13.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:13.704: INFO: rc: 1 May 6 22:16:13.704: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:14.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:14.700: INFO: rc: 1 May 6 22:16:14.700: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:15.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:15.710: INFO: rc: 1 May 6 22:16:15.710: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:16.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:16.706: INFO: rc: 1 May 6 22:16:16.706: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:17.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:17.711: INFO: rc: 1 May 6 22:16:17.711: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:18.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:18.738: INFO: rc: 1 May 6 22:16:18.738: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:19.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:19.720: INFO: rc: 1 May 6 22:16:19.720: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:20.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:20.724: INFO: rc: 1 May 6 22:16:20.724: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:21.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:21.793: INFO: rc: 1 May 6 22:16:21.793: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:22.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:22.718: INFO: rc: 1 May 6 22:16:22.718: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:23.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:23.779: INFO: rc: 1 May 6 22:16:23.779: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:24.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:24.740: INFO: rc: 1 May 6 22:16:24.740: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:25.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:25.766: INFO: rc: 1 May 6 22:16:25.766: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:26.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:26.716: INFO: rc: 1 May 6 22:16:26.716: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:27.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:27.686: INFO: rc: 1 May 6 22:16:27.686: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:28.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:29.079: INFO: rc: 1 May 6 22:16:29.079: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:29.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:29.692: INFO: rc: 1 May 6 22:16:29.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:30.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:30.689: INFO: rc: 1 May 6 22:16:30.689: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:31.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:31.692: INFO: rc: 1 May 6 22:16:31.692: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:32.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:32.708: INFO: rc: 1 May 6 22:16:32.708: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... May 6 22:16:33.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117' May 6 22:16:33.758: INFO: rc: 1 May 6 22:16:33.758: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117: Command stdout: stderr: + echo hostName + nc -v -t -w 2 10.10.190.207 32117 nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused command terminated with exit code 1 error: exit status 1 Retrying... 
May 6 22:16:34.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117'
May 6 22:16:34.698: INFO: rc: 1
May 6 22:16:34.698: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32117
nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
[the same attempt was repeated roughly once per second, 29 more times, from 22:16:35.439 through 22:17:03.686; every run returned rc: 1 with the same "Connection refused" failure in stderr]
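For reference, the probe the test keeps retrying is functionally a timed TCP connect against the NodePort. Below is a minimal standalone sketch of the same poll-until-deadline pattern in Go; unlike the test, it dials the node IP directly instead of running nc inside the exec pod, which assumes 10.10.190.207 is reachable from wherever you run it. This is an illustration of the pattern, not the framework's actual implementation.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// NodePort endpoint taken from the log; adjust for your own cluster.
	addr := "10.10.190.207:32117"
	// The framework's overall budget is 2m0s, with a 2s timeout per attempt
	// (the -w 2 passed to nc) and roughly one attempt per second.
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("service reachable")
			return
		}
		fmt.Printf("connect failed (%v), retrying...\n", err)
		time.Sleep(time.Second)
	}
	// Mirrors the error the test ultimately reports below.
	fmt.Printf("service is not reachable within 2m0s timeout on endpoint %s over TCP protocol\n", addr)
}
```

A "Connection refused" here (as opposed to a timeout) means the node actively rejected the connection, i.e. nothing was listening on the NodePort, which usually points at kube-proxy not having programmed the service rather than at a network drop.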
May 6 22:17:04.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117'
May 6 22:17:04.689: INFO: rc: 1
May 6 22:17:04.689: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32117
nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 6 22:17:04.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117'
May 6 22:17:04.935: INFO: rc: 1
May 6 22:17:04.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=services-2135 exec execpodwth54 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.10.190.207 32117:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 10.10.190.207 32117
nc: connect to 10.10.190.207 port 32117 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
May 6 22:17:04.936: FAIL: Unexpected error:
    <*errors.errorString | 0xc000b666d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32117 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32117 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.15()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 +0x358
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc001706a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc001706a80)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc001706a80, 0x70f99e8)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3
May 6 22:17:04.937: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
STEP: Collecting events from namespace "services-2135".
STEP: Found 17 events.
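The event dump that follows is produced by the framework's AfterEach diagnostics. A rough client-go equivalent is sketched below; this is not the framework's actual code, and the kubeconfig path and namespace are simply copied from the log for illustration.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used throughout this run; a placeholder elsewhere.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List every event recorded in the test namespace, like the dump below.
	events, err := client.CoreV1().Events("services-2135").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %s - event for %s: {%s} %s: %s\n",
			e.LastTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
}
```

In the dump below, note that the pods were scheduled, pulled, created, and started successfully around 22:14:52-22:15:01, so the failure is not a pod-startup problem.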
May 6 22:17:04.954: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for execpodwth54: { } Scheduled: Successfully assigned services-2135/execpodwth54 to node2
May 6 22:17:04.954: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-4zmvl: { } Scheduled: Successfully assigned services-2135/externalname-service-4zmvl to node1
May 6 22:17:04.954: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for externalname-service-gqgjx: { } Scheduled: Successfully assigned services-2135/externalname-service-gqgjx to node2
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:52 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-4zmvl
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:52 +0000 UTC - event for externalname-service: {replication-controller } SuccessfulCreate: Created pod: externalname-service-gqgjx
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:54 +0000 UTC - event for externalname-service-4zmvl: {kubelet node1} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:54 +0000 UTC - event for externalname-service-4zmvl: {kubelet node1} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 257.507387ms
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:54 +0000 UTC - event for externalname-service-4zmvl: {kubelet node1} Started: Started container externalname-service
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:54 +0000 UTC - event for externalname-service-4zmvl: {kubelet node1} Created: Created container externalname-service
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:56 +0000 UTC - event for externalname-service-gqgjx: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:56 +0000 UTC - event for externalname-service-gqgjx: {kubelet node2} Started: Started container externalname-service
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:56 +0000 UTC - event for externalname-service-gqgjx: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 292.437302ms
May 6 22:17:04.954: INFO: At 2022-05-06 22:14:56 +0000 UTC - event for externalname-service-gqgjx: {kubelet node2} Created: Created container externalname-service
May 6 22:17:04.954: INFO: At 2022-05-06 22:15:00 +0000 UTC - event for execpodwth54: {kubelet node2} Started: Started container agnhost-container
May 6 22:17:04.954: INFO: At 2022-05-06 22:15:00 +0000 UTC - event for execpodwth54: {kubelet node2} Created: Created container agnhost-container
May 6 22:17:04.954: INFO: At 2022-05-06 22:15:00 +0000 UTC - event for execpodwth54: {kubelet node2} Pulling: Pulling image "k8s.gcr.io/e2e-test-images/agnhost:2.32"
May 6 22:17:04.954: INFO: At 2022-05-06 22:15:00 +0000 UTC - event for execpodwth54: {kubelet node2} Pulled: Successfully pulled image "k8s.gcr.io/e2e-test-images/agnhost:2.32" in 293.694197ms
May 6 22:17:04.957: INFO: POD                         NODE   PHASE    GRACE  CONDITIONS
May 6 22:17:04.957: INFO: execpodwth54                node2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:58 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:15:01 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:15:01 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:58 +0000 UTC }]
May 6 22:17:04.957: INFO: externalname-service-4zmvl  node1  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:52 +0000 UTC } {Ready True
0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:52 +0000 UTC }] May 6 22:17:04.957: INFO: externalname-service-gqgjx node2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:57 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:57 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-05-06 22:14:52 +0000 UTC }] May 6 22:17:04.957: INFO: May 6 22:17:04.961: INFO: Logging node info for node master1 May 6 22:17:04.963: INFO: Node Info: &Node{ObjectMeta:{master1 3ea7d7b2-d1dd-4f70-bd03-4c3ec5a8e02c 48812 0 2022-05-06 20:07:30 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master1 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.202 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:07:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.0.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:15:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} 
{} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:03 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:03 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:03 +0000 UTC,LastTransitionTime:2022-05-06 20:07:27 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:17:03 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.202,},NodeAddress{Type:Hostname,Address:master1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:fddab730508c43d4ba9efb575f362bc6,SystemUUID:00ACFB60-0631-E711-906E-0017A4403562,BootID:8708efb4-3ff3-4f9b-a116-eb7702a71201,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 
quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 tasextender:latest localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[registry@sha256:1cd9409a311350c3072fe510b52046f104416376c126a479cef9a4dfe692cf57 registry:2.7.0],SizeBytes:24191168,},ContainerImage{Names:[nginx@sha256:b92d3b942c8b84da889ac3dc6e83bd20ffb8cd2d8298eba92c8b0bf88d52f03e nginx:1.20.1-alpine],SizeBytes:22721538,},ContainerImage{Names:[@ :],SizeBytes:5577654,},ContainerImage{Names:[alpine@sha256:c0e9560cda118f9ec63ddefb4a173a2b2a0347082d7dff7dc14272e7841a5b5a alpine:3.12.1],SizeBytes:5573013,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:17:04.964: INFO: Logging kubelet events for node master1 May 6 22:17:04.967: INFO: Logging pods the kubelet thinks is on node master1 May 6 22:17:04.976: INFO: kube-flannel-dz2ld started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:17:04.976: INFO: Init container install-cni ready: true, restart count 0 May 6 22:17:04.976: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:17:04.976: INFO: coredns-8474476ff8-jtj8t started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container coredns ready: true, restart count 1 May 6 22:17:04.976: INFO: container-registry-65d7c44b96-5pp99 started at 2022-05-06 20:14:46 +0000 UTC (0+2 container statuses recorded) May 6 22:17:04.976: INFO: Container docker-registry ready: true, restart count 0 May 6 22:17:04.976: INFO: Container nginx ready: true, restart count 0 May 6 22:17:04.976: INFO: kube-scheduler-master1 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-scheduler ready: true, restart count 0 May 6 22:17:04.976: INFO: kube-proxy-bnqzh 
started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:17:04.976: INFO: kube-multus-ds-amd64-pdpj8 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-multus ready: true, restart count 1 May 6 22:17:04.976: INFO: node-exporter-6wcwp started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:17:04.976: INFO: Container node-exporter ready: true, restart count 0 May 6 22:17:04.976: INFO: kube-apiserver-master1 started at 2022-05-06 20:08:39 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:17:04.976: INFO: kube-controller-manager-master1 started at 2022-05-06 20:16:36 +0000 UTC (0+1 container statuses recorded) May 6 22:17:04.976: INFO: Container kube-controller-manager ready: true, restart count 2 May 6 22:17:05.066: INFO: Latency metrics for node master1 May 6 22:17:05.066: INFO: Logging node info for node master2 May 6 22:17:05.068: INFO: Node Info: &Node{ObjectMeta:{master2 0aed38bc-6408-4920-b364-7d6b9bff7102 48743 0 2022-05-06 20:08:00 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master2 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.203 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {kube-controller-manager Update v1 2022-05-06 20:10:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}} {kubelet Update v1 2022-05-06 20:20:42 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234743296 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324579328 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:12 +0000 UTC,LastTransitionTime:2022-05-06 20:13:12 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:59 +0000 UTC,LastTransitionTime:2022-05-06 20:08:00 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:59 +0000 UTC,LastTransitionTime:2022-05-06 20:13:05 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.203,},NodeAddress{Type:Hostname,Address:master2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:94f6743f72cc461cb731cffce21ae835,SystemUUID:00A0DE53-E51D-E711-906E-0017A4403562,BootID:340a40ae-5d7c-47da-a6f4-a4b5b64d56f7,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 
sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:17:05.068: INFO: Logging kubelet events for node master2 May 6 22:17:05.070: INFO: Logging pods the kubelet thinks is on node master2 May 6 22:17:05.079: INFO: kube-controller-manager-master2 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-controller-manager ready: true, restart count 1 May 6 22:17:05.079: INFO: kube-proxy-tr8m9 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:17:05.079: INFO: dns-autoscaler-7df78bfcfb-srh4b 
started at 2022-05-06 20:10:54 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container autoscaler ready: true, restart count 1 May 6 22:17:05.079: INFO: node-exporter-b26kc started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:17:05.079: INFO: Container node-exporter ready: true, restart count 0 May 6 22:17:05.079: INFO: kube-scheduler-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:17:05.079: INFO: kube-apiserver-master2 started at 2022-05-06 20:08:40 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:17:05.079: INFO: kube-flannel-4kjc4 started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:17:05.079: INFO: Init container install-cni ready: true, restart count 0 May 6 22:17:05.079: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:17:05.079: INFO: kube-multus-ds-amd64-gd6zv started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.079: INFO: Container kube-multus ready: true, restart count 1 May 6 22:17:05.153: INFO: Latency metrics for node master2 May 6 22:17:05.153: INFO: Logging node info for node master3 May 6 22:17:05.156: INFO: Node Info: &Node{ObjectMeta:{master3 1cc41c26-3708-4912-8ff5-aa83b70d989e 48725 0 2022-05-06 20:08:11 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:master3 kubernetes.io/os:linux node-role.kubernetes.io/control-plane: node-role.kubernetes.io/master: node.kubernetes.io/exclude-from-external-load-balancers:] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.204 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/master.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubeadm Update v1 2022-05-06 20:08:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}},"f:labels":{"f:node-role.kubernetes.io/control-plane":{},"f:node-role.kubernetes.io/master":{},"f:node.kubernetes.io/exclude-from-external-load-balancers":{}}}}} {kube-controller-manager Update v1 2022-05-06 20:09:08 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}},"f:taints":{}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:17:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/master.version":{}}}}} {kubelet Update v1 2022-05-06 20:18:11 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{201234739200 0} {} 196518300Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{79550 -3} {} 79550m DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{200324575232 0} {} 195629468Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:10 +0000 UTC,LastTransitionTime:2022-05-06 20:13:10 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:57 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:57 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:16:57 +0000 UTC,LastTransitionTime:2022-05-06 20:08:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:16:57 +0000 UTC,LastTransitionTime:2022-05-06 20:13:06 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.204,},NodeAddress{Type:Hostname,Address:master3,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:045e9ce9dfcd42ef970e1ed3a55941b3,SystemUUID:008B1444-141E-E711-906E-0017A4403562,BootID:ee1f3fa6-4f8f-4726-91f5-b87ee8838a88,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 
centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/etcd@sha256:04833b601fa130512450afa45c4fe484fee1293634f34c7ddc231bd193c74017 quay.io/coreos/etcd:v3.4.13],SizeBytes:83790470,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:42454755,},ContainerImage{Names:[k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64@sha256:dce43068853ad396b0fb5ace9a56cc14114e31979e241342d12d04526be1dfcc k8s.gcr.io/cpa/cluster-proportional-autoscaler-amd64:1.8.3],SizeBytes:40647382,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:17:05.156: INFO: Logging kubelet events for node master3 May 6 22:17:05.158: INFO: Logging pods the kubelet thinks is on node master3 May 6 22:17:05.167: INFO: kube-scheduler-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-scheduler ready: true, restart count 2 May 6 22:17:05.167: INFO: kube-proxy-m9tv5 started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-proxy ready: true, restart count 2 May 6 22:17:05.167: INFO: 
kube-flannel-2twpc started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded) May 6 22:17:05.167: INFO: Init container install-cni ready: true, restart count 2 May 6 22:17:05.167: INFO: Container kube-flannel ready: true, restart count 1 May 6 22:17:05.167: INFO: node-exporter-mcj6x started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-rbac-proxy ready: true, restart count 0 May 6 22:17:05.167: INFO: Container node-exporter ready: true, restart count 0 May 6 22:17:05.167: INFO: kube-controller-manager-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-controller-manager ready: true, restart count 3 May 6 22:17:05.167: INFO: kube-multus-ds-amd64-mtj2t started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-multus ready: true, restart count 1 May 6 22:17:05.167: INFO: coredns-8474476ff8-t4bcd started at 2022-05-06 20:10:52 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container coredns ready: true, restart count 1 May 6 22:17:05.167: INFO: node-feature-discovery-controller-cff799f9f-rwzfc started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container nfd-controller ready: true, restart count 0 May 6 22:17:05.167: INFO: kube-apiserver-master3 started at 2022-05-06 20:13:06 +0000 UTC (0+1 container statuses recorded) May 6 22:17:05.167: INFO: Container kube-apiserver ready: true, restart count 0 May 6 22:17:05.255: INFO: Latency metrics for node master3 May 6 22:17:05.255: INFO: Logging node info for node node1 May 6 22:17:05.258: INFO: Node Info: &Node{ObjectMeta:{node1 851b0a69-efd4-49b7-98ef-f0cfe2d311c6 48753 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 
feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node1 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.207 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources: nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.3.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:00 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:21:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269608448 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884608000 0} {} BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:24 +0000 UTC,LastTransitionTime:2022-05-06 20:13:24 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:00 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:17:00 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.207,},NodeAddress{Type:Hostname,Address:node1,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bae6af61b07b462daf118753f89950b1,SystemUUID:00CDA902-D022-E711-906E-0017A4403562,BootID:871de03d-49a7-4910-8d15-63422e0e629a,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[@ :],SizeBytes:1003954967,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 cmk:v1.5.1 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[aquasec/kube-hunter@sha256:2be6820bc1d7e0f57193a9a27d5a3e16b2fd93c53747b03ce8ca48c6fc323781 aquasec/kube-hunter:0.3.1],SizeBytes:347611549,},ContainerImage{Names:[golang@sha256:db2475a1dbb2149508e5db31d7d77a75e6600d54be645f37681f03f2762169ba golang:alpine3.12],SizeBytes:301186719,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/jessie-dnsutils@sha256:702a992280fb7c3303e84a5801acbb4c9c7fcf48cffe0e9c8be3f0c60f74cf89 k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.4],SizeBytes:253371792,},ContainerImage{Names:[grafana/grafana@sha256:ba39bf5131dcc0464134a3ff0e26e8c6380415249fa725e5f619176601255172 grafana/grafana:7.5.4],SizeBytes:203572842,},ContainerImage{Names:[quay.io/prometheus/prometheus@sha256:b899dbd1b9017b9a379f76ce5b40eead01a62762c4f2057eacef945c3c22d210 quay.io/prometheus/prometheus:v2.22.1],SizeBytes:168344243,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[directxman12/k8s-prometheus-adapter@sha256:2b09a571757a12c0245f2f1a74db4d1b9386ff901cf57f5ce48a0a682bd0e3af directxman12/k8s-prometheus-adapter:v0.8.2],SizeBytes:68230450,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 
quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b nfvpe/sriov-device-plugin:latest localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-operator@sha256:850c86bfeda4389bc9c757a9fd17ca5a090ea6b424968178d4467492cfa13921 quay.io/prometheus-operator/prometheus-operator:v0.44.1],SizeBytes:42617274,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[quay.io/prometheus-operator/prometheus-config-reloader@sha256:4dee0fcf1820355ddd6986c1317b555693776c731315544a99d6cc59a7e34ce9 quay.io/prometheus-operator/prometheus-config-reloader:v0.44.1],SizeBytes:13433274,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nonewprivs@sha256:8ac1264691820febacf3aea5d152cbde6d10685731ec14966a9401c6f47a68ac k8s.gcr.io/e2e-test-images/nonewprivs:1.3],SizeBytes:7107254,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[alpine@sha256:c75ac27b49326926b803b9ed43bf088bc220d22556de1bc5f72d742c91398f69 alpine:3.12],SizeBytes:5581590,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:17:05.259: INFO: Logging kubelet events for node node1 May 6 22:17:05.261: INFO: Logging pods the kubelet thinks is on node node1 May 6 22:17:06.220: INFO: forbid-27531256-kn4xx started at 2022-05-06 22:16:00 +0000 UTC (0+1 container statuses recorded) May 6 22:17:06.220: INFO: Container c ready: true, restart count 0 May 6 22:17:06.220: INFO: nginx-proxy-node1 
started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container nginx-proxy ready: true, restart count 2
May 6 22:17:06.220: INFO: kube-proxy-xc75d started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:17:06.220: INFO: kube-flannel-ph67x started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:17:06.220: INFO: Init container install-cni ready: true, restart count 2
May 6 22:17:06.220: INFO: Container kube-flannel ready: true, restart count 3
May 6 22:17:06.220: INFO: externalname-service-4zmvl started at 2022-05-06 22:14:52 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container externalname-service ready: true, restart count 0
May 6 22:17:06.220: INFO: node-feature-discovery-worker-fbf8d started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container nfd-worker ready: true, restart count 0
May 6 22:17:06.220: INFO: busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 started at 2022-05-06 22:13:17 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container busybox ready: true, restart count 0
May 6 22:17:06.220: INFO: kube-multus-ds-amd64-2mv45 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container kube-multus ready: true, restart count 1
May 6 22:17:06.220: INFO: node-exporter-hqs4s started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:17:06.220: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:17:06.220: INFO: Container node-exporter ready: true, restart count 0
May 6 22:17:06.220: INFO: collectd-wq9cz started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May 6 22:17:06.220: INFO: Container collectd ready: true, restart count 0
May 6 22:17:06.220: INFO: Container collectd-exporter ready: true, restart count 0
May 6 22:17:06.220: INFO: Container rbac-proxy ready: true, restart count 0
May 6 22:17:06.220: INFO: pod-subpath-test-projected-dc72 started at 2022-05-06 22:16:52 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container test-container-subpath-projected-dc72 ready: true, restart count 0
May 6 22:17:06.220: INFO: prometheus-operator-585ccfb458-vrrfv started at 2022-05-06 20:23:12 +0000 UTC (0+2 container statuses recorded)
May 6 22:17:06.220: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:17:06.220: INFO: Container prometheus-operator ready: true, restart count 0
May 6 22:17:06.220: INFO: prometheus-k8s-0 started at 2022-05-06 20:23:29 +0000 UTC (0+4 container statuses recorded)
May 6 22:17:06.220: INFO: Container config-reloader ready: true, restart count 0
May 6 22:17:06.220: INFO: Container custom-metrics-apiserver ready: true, restart count 0
May 6 22:17:06.220: INFO: Container grafana ready: true, restart count 0
May 6 22:17:06.220: INFO: Container prometheus ready: true, restart count 1
May 6 22:17:06.220: INFO: busybox-3bb61db8-5678-4050-9203-125f691b2462 started at 2022-05-06 22:16:46 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container busybox ready: true, restart count 0
May 6 22:17:06.220: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-b6q29 started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:06.220: INFO: Container kube-sriovdp ready: true, restart count 0
May 6 22:17:06.220: INFO: cmk-init-discover-node1-tp69t started at 2022-05-06 20:21:33 +0000 UTC (0+3 container statuses recorded)
May 6 22:17:06.220: INFO: Container discover ready: false, restart count 0
May 6 22:17:06.220: INFO: Container init ready: false, restart count 0
May 6 22:17:06.220: INFO: Container install ready: false, restart count 0
May 6 22:17:06.220: INFO: cmk-trkp8 started at 2022-05-06 20:22:16 +0000 UTC (0+2 container statuses recorded)
May 6 22:17:06.220: INFO: Container nodereport ready: true, restart count 0
May 6 22:17:06.220: INFO: Container reconcile ready: true, restart count 0
May 6 22:17:07.734: INFO: Latency metrics for node node1
May 6 22:17:07.734: INFO: Logging node info for node node2
May 6 22:17:07.737: INFO: Node Info: &Node{ObjectMeta:{node2 2dab2a66-f2eb-49db-9725-3dda82cede11 48866 0 2022-05-06 20:09:17 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux cmk.intel.com/cmk-node:true feature.node.kubernetes.io/cpu-cpuid.ADX:true feature.node.kubernetes.io/cpu-cpuid.AESNI:true feature.node.kubernetes.io/cpu-cpuid.AVX:true feature.node.kubernetes.io/cpu-cpuid.AVX2:true feature.node.kubernetes.io/cpu-cpuid.AVX512BW:true feature.node.kubernetes.io/cpu-cpuid.AVX512CD:true feature.node.kubernetes.io/cpu-cpuid.AVX512DQ:true feature.node.kubernetes.io/cpu-cpuid.AVX512F:true feature.node.kubernetes.io/cpu-cpuid.AVX512VL:true feature.node.kubernetes.io/cpu-cpuid.FMA3:true feature.node.kubernetes.io/cpu-cpuid.HLE:true feature.node.kubernetes.io/cpu-cpuid.IBPB:true feature.node.kubernetes.io/cpu-cpuid.MPX:true feature.node.kubernetes.io/cpu-cpuid.RTM:true feature.node.kubernetes.io/cpu-cpuid.SSE4:true feature.node.kubernetes.io/cpu-cpuid.SSE42:true feature.node.kubernetes.io/cpu-cpuid.STIBP:true feature.node.kubernetes.io/cpu-cpuid.VMX:true feature.node.kubernetes.io/cpu-cstate.enabled:true feature.node.kubernetes.io/cpu-hardware_multithreading:true feature.node.kubernetes.io/cpu-pstate.status:active feature.node.kubernetes.io/cpu-pstate.turbo:true feature.node.kubernetes.io/cpu-rdt.RDTCMT:true feature.node.kubernetes.io/cpu-rdt.RDTL3CA:true feature.node.kubernetes.io/cpu-rdt.RDTMBA:true feature.node.kubernetes.io/cpu-rdt.RDTMBM:true feature.node.kubernetes.io/cpu-rdt.RDTMON:true feature.node.kubernetes.io/kernel-config.NO_HZ:true feature.node.kubernetes.io/kernel-config.NO_HZ_FULL:true feature.node.kubernetes.io/kernel-selinux.enabled:true feature.node.kubernetes.io/kernel-version.full:3.10.0-1160.62.1.el7.x86_64 feature.node.kubernetes.io/kernel-version.major:3 feature.node.kubernetes.io/kernel-version.minor:10 feature.node.kubernetes.io/kernel-version.revision:0 feature.node.kubernetes.io/memory-numa:true feature.node.kubernetes.io/network-sriov.capable:true feature.node.kubernetes.io/network-sriov.configured:true feature.node.kubernetes.io/pci-0300_1a03.present:true feature.node.kubernetes.io/storage-nonrotationaldisk:true feature.node.kubernetes.io/system-os_release.ID:centos feature.node.kubernetes.io/system-os_release.VERSION_ID:7 feature.node.kubernetes.io/system-os_release.VERSION_ID.major:7 kubernetes.io/arch:amd64 kubernetes.io/hostname:node2 kubernetes.io/os:linux] map[flannel.alpha.coreos.com/backend-data:null flannel.alpha.coreos.com/backend-type:host-gw flannel.alpha.coreos.com/kube-subnet-manager:true flannel.alpha.coreos.com/public-ip:10.10.190.208 kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock nfd.node.kubernetes.io/extended-resources:
nfd.node.kubernetes.io/feature-labels:cpu-cpuid.ADX,cpu-cpuid.AESNI,cpu-cpuid.AVX,cpu-cpuid.AVX2,cpu-cpuid.AVX512BW,cpu-cpuid.AVX512CD,cpu-cpuid.AVX512DQ,cpu-cpuid.AVX512F,cpu-cpuid.AVX512VL,cpu-cpuid.FMA3,cpu-cpuid.HLE,cpu-cpuid.IBPB,cpu-cpuid.MPX,cpu-cpuid.RTM,cpu-cpuid.SSE4,cpu-cpuid.SSE42,cpu-cpuid.STIBP,cpu-cpuid.VMX,cpu-cstate.enabled,cpu-hardware_multithreading,cpu-pstate.status,cpu-pstate.turbo,cpu-rdt.RDTCMT,cpu-rdt.RDTL3CA,cpu-rdt.RDTMBA,cpu-rdt.RDTMBM,cpu-rdt.RDTMON,kernel-config.NO_HZ,kernel-config.NO_HZ_FULL,kernel-selinux.enabled,kernel-version.full,kernel-version.major,kernel-version.minor,kernel-version.revision,memory-numa,network-sriov.capable,network-sriov.configured,pci-0300_1a03.present,storage-nonrotationaldisk,system-os_release.ID,system-os_release.VERSION_ID,system-os_release.VERSION_ID.major nfd.node.kubernetes.io/worker.version:v0.8.2 node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kube-controller-manager Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.4.0/24\"":{}}}}} {kubeadm Update v1 2022-05-06 20:09:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} {flanneld Update v1 2022-05-06 20:10:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:flannel.alpha.coreos.com/backend-data":{},"f:flannel.alpha.coreos.com/backend-type":{},"f:flannel.alpha.coreos.com/kube-subnet-manager":{},"f:flannel.alpha.coreos.com/public-ip":{}}},"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}}} {nfd-master Update v1 2022-05-06 20:18:01 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:nfd.node.kubernetes.io/extended-resources":{},"f:nfd.node.kubernetes.io/feature-labels":{},"f:nfd.node.kubernetes.io/worker.version":{}},"f:labels":{"f:feature.node.kubernetes.io/cpu-cpuid.ADX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AESNI":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX2":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512BW":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512CD":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512DQ":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512F":{},"f:feature.node.kubernetes.io/cpu-cpuid.AVX512VL":{},"f:feature.node.kubernetes.io/cpu-cpuid.FMA3":{},"f:feature.node.kubernetes.io/cpu-cpuid.HLE":{},"f:feature.node.kubernetes.io/cpu-cpuid.IBPB":{},"f:feature.node.kubernetes.io/cpu-cpuid.MPX":{},"f:feature.node.kubernetes.io/cpu-cpuid.RTM":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE4":{},"f:feature.node.kubernetes.io/cpu-cpuid.SSE42":{},"f:feature.node.kubernetes.io/cpu-cpuid.STIBP":{},"f:feature.node.kubernetes.io/cpu-cpuid.VMX":{},"f:feature.node.kubernetes.io/cpu-cstate.enabled":{},"f:feature.node.kubernetes.io/cpu-hardware_multithreading":{},"f:feature.node.kubernetes.io/cpu-pstate.status":{},"f:feature.node.kubernetes.io/cpu-pstate.turbo":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTCMT":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTL3CA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBA":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMBM":{},"f:feature.node.kubernetes.io/cpu-rdt.RDTMON":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ":{},"f:feature.node.kubernetes.io/kernel-config.NO_HZ_FULL":{},"f:feature.node.kubernetes.io/kernel-selinux.enabled":{},"f:feature.node.kubernetes.io/kernel-version.full":{},"f:feature.node.kubernetes.io/kernel-version.major":{},"f:feature.node.kubernetes.io/kernel-version.minor":{},"f:feature.node.kubernetes.io/kernel-version.revision":{},"f:feature.node.kubernetes.io/memory-numa":{},"f:feature.node.kubernetes.io/network-sriov.capable":{},"f:feature.node.kubernetes.io/network-sriov.configured":{},"f:feature.node.kubernetes.io/pci-0300_1a03.present":{},"f:feature.node.kubernetes.io/storage-nonrotationaldisk":{},"f:feature.node.kubernetes.io/system-os_release.ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID":{},"f:feature.node.kubernetes.io/system-os_release.VERSION_ID.major":{}}}}} {Swagger-Codegen Update v1 2022-05-06 20:21:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{"f:cmk.intel.com/cmk-node":{}}},"f:status":{"f:capacity":{"f:cmk.intel.com/exclusive-cores":{}}}}} {kubelet Update v1 2022-05-06 20:22:07 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}},"f:status":{"f:allocatable":{"f:cmk.intel.com/exclusive-cores":{},"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:capacity":{"f:ephemeral-storage":{},"f:intel.com/intel_sriov_netdevice":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}}}]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{80 0} {} 80 DecimalSI},ephemeral-storage: {{451201003520 0} {} 440625980Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{201269604352 0} {} 196552348Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cmk.intel.com/exclusive-cores: {{3 0} {} 3 DecimalSI},cpu: {{77 0} {} 77 DecimalSI},ephemeral-storage: {{406080902496 0} {} 406080902496 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{21474836480 0} {} 20Gi BinarySI},intel.com/intel_sriov_netdevice: {{4 0} {} 4 DecimalSI},memory: {{178884603904 0} {} 174691996Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-05-06 20:13:27 +0000 UTC,LastTransitionTime:2022-05-06 20:13:27 +0000 UTC,Reason:FlannelIsUp,Message:Flannel is running on this node,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:07 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:07 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-05-06 22:17:07 +0000 UTC,LastTransitionTime:2022-05-06 20:09:17 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-05-06 22:17:07 +0000 UTC,LastTransitionTime:2022-05-06 20:10:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.10.190.208,},NodeAddress{Type:Hostname,Address:node2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c77ab26e59394c64a4d3ca530c1cefb5,SystemUUID:80B3CD56-852F-E711-906E-0017A4403562,BootID:0fe5c664-0bc1-49bd-8b38-c77825eebe76,KernelVersion:3.10.0-1160.62.1.el7.x86_64,OSImage:CentOS Linux 7 
(Core),ContainerRuntimeVersion:docker://20.10.15,KubeletVersion:v1.21.1,KubeProxyVersion:v1.21.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[opnfv/barometer-collectd@sha256:f30e965aa6195e6ac4ca2410f5a15e3704c92e4afa5208178ca22a7911975d66],SizeBytes:1075575763,},ContainerImage{Names:[cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[localhost:30500/cmk@sha256:1d76f40bb2f63da16ecddd2971faaf5832a37178bcd40f0f8b0f2d7210829a17 localhost:30500/cmk:v1.5.1],SizeBytes:727675348,},ContainerImage{Names:[centos/python-36-centos7@sha256:ac50754646f0d37616515fb30467d8743fb12954260ec36c9ecb5a94499447e0 centos/python-36-centos7:latest],SizeBytes:650061677,},ContainerImage{Names:[sirot/netperf-latest@sha256:23929b922bb077cb341450fc1c90ae42e6f75da5d7ce61cd1880b08560a5ff85 sirot/netperf-latest:latest],SizeBytes:282025213,},ContainerImage{Names:[nfvpe/multus@sha256:ac1266b87ba44c09dc2a336f0d5dad968fccd389ce1944a85e87b32cd21f7224 nfvpe/multus:v3.4.2],SizeBytes:276587882,},ContainerImage{Names:[kubernetesui/dashboard-amd64@sha256:b9217b835cdcb33853f50a9cf13617ee0f8b887c508c5ac5110720de154914e4 kubernetesui/dashboard-amd64:v2.2.0],SizeBytes:225135791,},ContainerImage{Names:[nginx@sha256:a05b0cdd4fc1be3b224ba9662ebdf98fe44c09c0c9215b45f84344c12867002e nginx:1.21.1],SizeBytes:133175493,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:53af05c2a6cddd32cebf5856f71994f5d41ef2a62824b87f140f2087f91e4a38 k8s.gcr.io/kube-proxy:v1.21.1],SizeBytes:130788187,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 k8s.gcr.io/e2e-test-images/agnhost:2.32],SizeBytes:125930239,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:53a13cd1588391888c5a8ac4cef13d3ee6d229cd904038936731af7131d193a9 k8s.gcr.io/kube-apiserver:v1.21.1],SizeBytes:125612423,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50 k8s.gcr.io/e2e-test-images/httpd:2.4.38-1],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:1f36a24cfb5e0c3f725d7565a867c2384282fcbeccc77b07b423c9da95763a9a k8s.gcr.io/e2e-test-images/nautilus:1.4],SizeBytes:121748345,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:3daf9c9f9fe24c3a7b92ce864ef2d8d610c84124cc7d98e68fdbe94038337228 k8s.gcr.io/kube-controller-manager:v1.21.1],SizeBytes:119825302,},ContainerImage{Names:[k8s.gcr.io/nfd/node-feature-discovery@sha256:74a1cbd82354f148277e20cdce25d57816e355a896bc67f67a0f722164b16945 k8s.gcr.io/nfd/node-feature-discovery:v0.8.2],SizeBytes:108486428,},ContainerImage{Names:[quay.io/coreos/flannel@sha256:34860ea294a018d392e61936f19a7862d5e92039d196cac9176da14b2bbd0fe3 quay.io/coreos/flannel:v0.13.0-amd64],SizeBytes:57156911,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:a8c4084db3b381f0806ea563c7ec842cc3604c57722a916c91fb59b00ff67d63 k8s.gcr.io/kube-scheduler:v1.21.1],SizeBytes:50635642,},ContainerImage{Names:[quay.io/brancz/kube-rbac-proxy@sha256:05e15e1164fd7ac85f5702b3f87ef548f4e00de3a79e6c4a6a34c92035497a9a quay.io/brancz/kube-rbac-proxy:v0.8.0],SizeBytes:48952053,},ContainerImage{Names:[quay.io/coreos/kube-rbac-proxy@sha256:e10d1d982dd653db74ca87a1d1ad017bc5ef1aeb651bdea089debf16485b080b quay.io/coreos/kube-rbac-proxy:v0.5.0],SizeBytes:46626428,},ContainerImage{Names:[localhost:30500/sriov-device-plugin@sha256:07ca00a3e221b8c85c70fc80bf770768db15bb7d656065369d9fd4f6adbe838b 
localhost:30500/sriov-device-plugin:v3.3.2],SizeBytes:42676189,},ContainerImage{Names:[kubernetesui/metrics-scraper@sha256:1f977343873ed0e2efd4916a6b2f3075f310ff6fe42ee098f54fc58aa7a28ab7 kubernetesui/metrics-scraper:v1.0.6],SizeBytes:34548789,},ContainerImage{Names:[localhost:30500/tasextender@sha256:1be4cb48d285cf30ab1959a41fa671166a04224264f6465807209a699f066656 localhost:30500/tasextender:0.4],SizeBytes:28910791,},ContainerImage{Names:[quay.io/prometheus/node-exporter@sha256:cf66a6bbd573fd819ea09c72e21b528e9252d58d01ae13564a29749de1e48e0f quay.io/prometheus/node-exporter:v1.0.1],SizeBytes:26430341,},ContainerImage{Names:[aquasec/kube-bench@sha256:3544f6662feb73d36fdba35b17652e2fd73aae45bd4b60e76d7ab928220b3cc6 aquasec/kube-bench:0.3.1],SizeBytes:19301876,},ContainerImage{Names:[prom/collectd-exporter@sha256:73fbda4d24421bff3b741c27efc36f1b6fbe7c57c378d56d4ff78101cd556654],SizeBytes:17463681,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:503b7abb89e57383eba61cc8a9cb0b495ea575c516108f7d972a6ff6e1ab3c9b k8s.gcr.io/e2e-test-images/nginx:1.14-1],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[appropriate/curl@sha256:027a0ad3c69d085fea765afca9984787b780c172cead6502fec989198b98d8bb appropriate/curl:edge],SizeBytes:5654234,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592 k8s.gcr.io/e2e-test-images/busybox:1.29-1],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47 busybox:1.28],SizeBytes:1146369,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:6c3835cab3980f11b83277305d0d736051c32b17606f5ec59f1dda67c9ba3810 k8s.gcr.io/pause:3.4.1],SizeBytes:682696,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:a319ac2280eb7e3a59e252e54b76327cb4a33cf8389053b0d78277f22bbca2fa k8s.gcr.io/pause:3.3],SizeBytes:682696,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 6 22:17:07.737: INFO: Logging kubelet events for node node2 May 6 22:17:07.739: INFO: Logging pods the kubelet thinks is on node node2 May 6 22:17:07.754: INFO: node-feature-discovery-worker-8phhs started at 2022-05-06 20:17:54 +0000 UTC (0+1 container statuses recorded) May 6 22:17:07.754: INFO: Container nfd-worker ready: true, restart count 0 May 6 22:17:07.754: INFO: tas-telemetry-aware-scheduling-84ff454dfb-kb2t7 started at 2022-05-06 20:26:21 +0000 UTC (0+1 container statuses recorded) May 6 22:17:07.754: INFO: Container tas-extender ready: true, restart count 0 May 6 22:17:07.754: INFO: test-rs-7j7wx started at 2022-05-06 22:17:01 +0000 UTC (0+1 container statuses recorded) May 6 22:17:07.754: INFO: Container httpd ready: true, restart count 0 May 6 22:17:07.754: INFO: kube-multus-ds-amd64-gtzj9 started at 2022-05-06 20:10:25 +0000 UTC (0+1 container statuses recorded) May 6 22:17:07.754: INFO: Container kube-multus ready: true, restart count 1 May 6 22:17:07.754: INFO: cmk-cb5rv started at 2022-05-06 20:22:17 +0000 UTC (0+2 container statuses recorded) May 6 22:17:07.754: INFO: Container nodereport ready: true, restart count 0 May 6 22:17:07.754: INFO: Container reconcile ready: true, restart count 0 May 6 22:17:07.754: INFO: ss2-0 started at 2022-05-06 22:17:05 +0000 UTC (0+1 container statuses recorded) May 6 22:17:07.754: INFO: Container webserver ready: false, restart 
count 0
May 6 22:17:07.754: INFO: kube-proxy-g77fj started at 2022-05-06 20:09:20 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container kube-proxy ready: true, restart count 2
May 6 22:17:07.755: INFO: test-rs-lbfbv started at 2022-05-06 22:17:06 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container httpd ready: false, restart count 0
May 6 22:17:07.755: INFO: collectd-mbz88 started at 2022-05-06 20:27:12 +0000 UTC (0+3 container statuses recorded)
May 6 22:17:07.755: INFO: Container collectd ready: true, restart count 0
May 6 22:17:07.755: INFO: Container collectd-exporter ready: true, restart count 0
May 6 22:17:07.755: INFO: Container rbac-proxy ready: true, restart count 0
May 6 22:17:07.755: INFO: externalname-service-gqgjx started at 2022-05-06 22:14:52 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container externalname-service ready: true, restart count 0
May 6 22:17:07.755: INFO: cmk-init-discover-node2-kt2nj started at 2022-05-06 20:21:53 +0000 UTC (0+3 container statuses recorded)
May 6 22:17:07.755: INFO: Container discover ready: false, restart count 0
May 6 22:17:07.755: INFO: Container init ready: false, restart count 0
May 6 22:17:07.755: INFO: Container install ready: false, restart count 0
May 6 22:17:07.755: INFO: node-exporter-4xqmj started at 2022-05-06 20:23:20 +0000 UTC (0+2 container statuses recorded)
May 6 22:17:07.755: INFO: Container kube-rbac-proxy ready: true, restart count 0
May 6 22:17:07.755: INFO: Container node-exporter ready: true, restart count 0
May 6 22:17:07.755: INFO: kubernetes-metrics-scraper-5558854cb-4ztpz started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container kubernetes-metrics-scraper ready: true, restart count 1
May 6 22:17:07.755: INFO: kubernetes-dashboard-785dcbb76d-29wg6 started at 2022-05-06 20:10:56 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container kubernetes-dashboard ready: true, restart count 2
May 6 22:17:07.755: INFO: sriov-net-dp-kube-sriov-device-plugin-amd64-6rd2h started at 2022-05-06 20:19:12 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container kube-sriovdp ready: true, restart count 0
May 6 22:17:07.755: INFO: cmk-webhook-6c9d5f8578-vllpr started at 2022-05-06 20:22:17 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container cmk-webhook ready: true, restart count 0
May 6 22:17:07.755: INFO: ss2-1 started at 2022-05-06 22:16:18 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container webserver ready: true, restart count 0
May 6 22:17:07.755: INFO: nginx-proxy-node2 started at 2022-05-06 20:09:17 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container nginx-proxy ready: true, restart count 2
May 6 22:17:07.755: INFO: kube-flannel-ffwfn started at 2022-05-06 20:10:16 +0000 UTC (1+1 container statuses recorded)
May 6 22:17:07.755: INFO: Init container install-cni ready: true, restart count 1
May 6 22:17:07.755: INFO: Container kube-flannel ready: true, restart count 2
May 6 22:17:07.755: INFO: execpodwth54 started at 2022-05-06 22:14:58 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container agnhost-container ready: true, restart count 0
May 6 22:17:07.755: INFO: my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910-5rrjq started at 2022-05-06 22:17:07 +0000 UTC (0+1 container statuses recorded)
May 6 22:17:07.755: INFO: Container my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910 ready: false, restart count 0
May 6 22:17:09.153: INFO: Latency metrics for node node2
May 6 22:17:09.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2135" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
• Failure [136.604 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to change the type from ExternalName to NodePort [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
May 6 22:17:04.936: Unexpected error:
    <*errors.errorString | 0xc000b666d0>: {
        s: "service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32117 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint 10.10.190.207:32117 over TCP protocol
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351
------------------------------
{"msg":"FAILED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":28,"skipped":480,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
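The "service is not reachable" failure above comes from the framework's generic TCP reachability assertion: after the Service is converted to NodePort, the test repeatedly dials the node IP on the allocated port (10.10.190.207:32117 here) until a connection succeeds or the 2m0s budget is exhausted. A minimal standalone sketch of that style of check in Go; the helper name and one-second cadence are illustrative, not the framework's own code:

package sketch

import (
	"fmt"
	"net"
	"time"
)

// waitForNodePort dials addr once per second until a TCP connection
// succeeds or the timeout elapses, returning an error modeled on the
// failure message in the log above.
func waitForNodePort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // connected: the NodePort is reachable
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("service is not reachable within %v timeout on endpoint %s over TCP protocol", timeout, addr)
}

A run in which every dial attempt fails ends with exactly the kind of error string recorded in the failure block above.
------------------------------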
[BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:16:51.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-dc72
STEP: Creating a pod to test atomic-volume-subpath
May 6 22:16:52.010: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-dc72" in namespace "subpath-5546" to be "Succeeded or Failed"
May 6 22:16:52.014: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113256ms
May 6 22:16:54.018: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00726236s
May 6 22:16:56.023: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 4.012492161s
May 6 22:16:58.026: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 6.01579917s
May 6 22:17:00.030: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 8.019955555s
May 6 22:17:02.035: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 10.024250767s
May 6 22:17:04.040: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 12.029702951s
May 6 22:17:06.047: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 14.036127816s
May 6 22:17:08.050: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 16.039975473s
May 6 22:17:10.055: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 18.044371189s
May 6 22:17:12.059: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 20.048548579s
May 6 22:17:14.066: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Running", Reason="", readiness=true. Elapsed: 22.055224098s
May 6 22:17:16.070: INFO: Pod "pod-subpath-test-projected-dc72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.059321895s
STEP: Saw pod success
May 6 22:17:16.070: INFO: Pod "pod-subpath-test-projected-dc72" satisfied condition "Succeeded or Failed"
May 6 22:17:16.072: INFO: Trying to get logs from node node1 pod pod-subpath-test-projected-dc72 container test-container-subpath-projected-dc72:
STEP: delete the pod
May 6 22:17:16.085: INFO: Waiting for pod pod-subpath-test-projected-dc72 to disappear
May 6 22:17:16.087: INFO: Pod pod-subpath-test-projected-dc72 no longer exists
STEP: Deleting pod pod-subpath-test-projected-dc72
May 6 22:17:16.087: INFO: Deleting pod "pod-subpath-test-projected-dc72" in namespace "subpath-5546"
[AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:16.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5546" for this suite.
• [SLOW TEST:24.128 seconds]
[sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":124,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
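The "Waiting up to 5m0s for pod ... to be "Succeeded or Failed"" lines above are produced by a simple condition loop against the API server: fetch the pod, log its phase and elapsed time, and stop on a terminal phase. A client-go sketch of the same pattern, assuming a configured kubernetes.Interface; the function name is illustrative:

package sketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodSuccess polls the pod every two seconds until it reaches a
// terminal phase or the timeout expires, mirroring the log lines above.
func waitForPodSuccess(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, pod.Status.Phase, time.Since(start))
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // "Saw pod success"
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %s/%s failed", ns, name)
		default:
			return false, nil // keep polling
		}
	})
}
------------------------------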
[BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:17:06.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replication controller my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910
May 6 22:17:07.008: INFO: Pod name my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910: Found 0 pods out of 1
May 6 22:17:12.019: INFO: Pod name my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910: Found 1 pods out of 1
May 6 22:17:12.019: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910" are running
May 6 22:17:12.022: INFO: Pod "my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910-5rrjq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:17:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:17:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:17:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-05-06 22:17:07 +0000 UTC Reason: Message:}])
May 6 22:17:12.023: INFO: Trying to dial the pod
May 6 22:17:17.041: INFO: Controller my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910: Got expected result from replica 1 [my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910-5rrjq]: "my-hostname-basic-3b713f11-20cf-467e-81e5-21f6aa6f4910-5rrjq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:17.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8892" for this suite.
• [SLOW TEST:10.069 seconds]
[sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":24,"skipped":460,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
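The "Got expected result from replica 1" line above means the test dialed the single replica and it answered with its own pod name; that is how serve-hostname replicas are verified. A stdlib sketch of that per-replica check, assuming the replica serves its hostname on port 9376 as the serve-hostname image does (the conformance test actually dials replicas through the API server proxy; this helper is hypothetical):

package sketch

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// verifyReplica fetches from one replica and checks that the response
// body equals the pod's own name.
func verifyReplica(podIP, podName string) error {
	resp, err := http.Get(fmt.Sprintf("http://%s:9376/", podIP))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if got := strings.TrimSpace(string(body)); got != podName {
		return fmt.Errorf("replica at %s answered %q, want %q", podIP, got, podName)
	}
	return nil
}
------------------------------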
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:17:16.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
May 6 22:17:16.198: INFO: Waiting up to 5m0s for pod "downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc" in namespace "downward-api-1806" to be "Succeeded or Failed"
May 6 22:17:16.200: INFO: Pod "downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 1.843601ms
May 6 22:17:18.203: INFO: Pod "downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005288533s
May 6 22:17:20.208: INFO: Pod "downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010446111s
STEP: Saw pod success
May 6 22:17:20.208: INFO: Pod "downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc" satisfied condition "Succeeded or Failed"
May 6 22:17:20.212: INFO: Trying to get logs from node node2 pod downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc container dapi-container:
STEP: delete the pod
May 6 22:17:20.228: INFO: Waiting for pod downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc to disappear
May 6 22:17:20.230: INFO: Pod downward-api-88cc1cbd-70d1-4980-a0cd-9a29082fa9dc no longer exists
[AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:20.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1806" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":161,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
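The pod created above gets its own UID injected as an environment variable through the downward API, and the container simply prints its environment so the test can match the value against metadata.uid. A sketch of the relevant spec using k8s.io/api types; the object name and busybox image are illustrative (the e2e test uses its own test image):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIPod exposes the pod's UID to its container as POD_UID.
var downwardAPIPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
	Spec: corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Containers: []corev1.Container{{
			Name:    "dapi-container",
			Image:   "busybox:1.28",
			Command: []string{"sh", "-c", "env"},
			Env: []corev1.EnvVar{{
				Name: "POD_UID",
				ValueFrom: &corev1.EnvVarSource{
					// Resolved by the kubelet from the pod's own metadata.
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.uid"},
				},
			}},
		}},
	},
}
------------------------------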
[BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:13:17.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 in namespace container-probe-978
May 6 22:13:23.731: INFO: Started pod busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 in namespace container-probe-978
STEP: checking the pod's current state and verifying that restartCount is present
May 6 22:13:23.733: INFO: Initial restart count of pod busybox-d589a3da-337d-49b5-914f-9d5a5b28f502 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:24.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-978" for this suite.
• [SLOW TEST:246.576 seconds]
[sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":696,"failed":0}
SSSSSS
------------------------------
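This probe test runs a pod whose exec liveness probe (cat /tmp/health) keeps succeeding because the health file stays in place, then watches for several minutes to assert that restartCount remains 0. A sketch of such a pod spec; the command and probe timings are illustrative, and note that in k8s.io/api v0.21 (matching this cluster's version) the probe's embedded handler field is named Handler, renamed to ProbeHandler in later releases:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessPod keeps /tmp/health present, so the exec probe never fails
// and the kubelet never restarts the container.
var livenessPod = &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
	Spec: corev1.PodSpec{
		Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "busybox:1.28",
			Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
			LivenessProbe: &corev1.Probe{
				Handler: corev1.Handler{
					Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
				},
				InitialDelaySeconds: 5,
				PeriodSeconds:       5,
				FailureThreshold:    1,
			},
		}},
	},
}
------------------------------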
[BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:17:24.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:24.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6805" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":46,"skipped":702,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
May 6 22:17:09.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
May 6 22:17:25.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2987" for this suite.
• [SLOW TEST:16.114 seconds]
[sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":29,"skipped":498,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]}
SSSSSSSSSS
------------------------------
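A ResourceQuota is tied to best-effort pods by listing the BestEffort scope in its spec: only pods with no resource requests or limits count against it, which is why the "not best effort" quota above ignored the best-effort pod's usage and vice versa. A minimal sketch; the object name and pod count are illustrative:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// bestEffortQuota counts only BestEffort pods (no requests/limits) in its
// namespace; only the pods resource may be constrained under this scope.
var bestEffortQuota = &corev1.ResourceQuota{
	ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
	Spec: corev1.ResourceQuotaSpec{
		Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
	},
}
------------------------------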
Elapsed: 4.009033281s STEP: Saw pod success May 6 22:17:28.422: INFO: Pod "pod-2a9009a2-d491-4330-8181-b099f97a10ff" satisfied condition "Succeeded or Failed" May 6 22:17:28.424: INFO: Trying to get logs from node node1 pod pod-2a9009a2-d491-4330-8181-b099f97a10ff container test-container: STEP: delete the pod May 6 22:17:28.438: INFO: Waiting for pod pod-2a9009a2-d491-4330-8181-b099f97a10ff to disappear May 6 22:17:28.440: INFO: Pod pod-2a9009a2-d491-4330-8181-b099f97a10ff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:28.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2520" for this suite. • ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:20.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name s-test-opt-del-af4ef3ef-4053-41c5-b412-b9cffb64f600 STEP: Creating secret with name s-test-opt-upd-cf22b149-70ec-4be7-8a68-feb39284cf27 STEP: Creating the pod May 6 22:17:20.349: INFO: The status of Pod pod-secrets-4e3102de-4261-4ed8-b21a-b18865262930 is Pending, waiting for it to be Running (with Ready = true) May 6 22:17:22.353: INFO: The status of Pod pod-secrets-4e3102de-4261-4ed8-b21a-b18865262930 is Pending, waiting for it to be Running (with Ready = true) May 6 22:17:24.352: INFO: The status of Pod pod-secrets-4e3102de-4261-4ed8-b21a-b18865262930 is Running (Ready = true) STEP: Deleting secret s-test-opt-del-af4ef3ef-4053-41c5-b412-b9cffb64f600 STEP: Updating secret s-test-opt-upd-cf22b149-70ec-4be7-8a68-feb39284cf27 STEP: Creating secret with name s-test-opt-create-c5a62030-3dc4-4c93-b829-ed637ec496ee STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:28.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5714" for this suite. 
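For reference, the optional-secret behavior exercised by the [sig-storage] Secrets test above (delete one referenced secret, update another, create a third, and watch the mounted volume converge) can be reproduced with a pod along these lines; this is a minimal sketch, not the test's exact spec, and the pod and secret names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo                # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/opt; sleep 5; done"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt            # illustrative; may not exist yet
      optional: true                    # pod starts even if the secret is absent

With optional: true the kubelet keeps the volume in sync as referenced secrets are deleted, updated, or created, which is the sequence of STEP lines above.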
• [SLOW TEST:8.150 seconds] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":196,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:17.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:17.092: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 22:17:25.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2173 --namespace=crd-publish-openapi-2173 create -f -' May 6 22:17:26.237: INFO: stderr: "" May 6 22:17:26.237: INFO: stdout: "e2e-test-crd-publish-openapi-3364-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 22:17:26.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2173 --namespace=crd-publish-openapi-2173 delete e2e-test-crd-publish-openapi-3364-crds test-cr' May 6 22:17:26.434: INFO: stderr: "" May 6 22:17:26.434: INFO: stdout: "e2e-test-crd-publish-openapi-3364-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 6 22:17:26.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2173 --namespace=crd-publish-openapi-2173 apply -f -' May 6 22:17:26.783: INFO: stderr: "" May 6 22:17:26.783: INFO: stdout: "e2e-test-crd-publish-openapi-3364-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 22:17:26.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2173 --namespace=crd-publish-openapi-2173 delete e2e-test-crd-publish-openapi-3364-crds test-cr' May 6 22:17:26.963: INFO: stderr: "" May 6 22:17:26.963: INFO: stdout: "e2e-test-crd-publish-openapi-3364-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 22:17:26.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2173 explain e2e-test-crd-publish-openapi-3364-crds' May 6 22:17:27.335: INFO: stderr: "" May 6 22:17:27.335: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3364-crd\nVERSION: 
crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:30.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2173" for this suite. • [SLOW TEST:13.947 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":25,"skipped":468,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:31.026: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:33.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-4619" for this suite. 
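The create/update/patch sequence in the DisruptionController test above operates on an object of roughly this shape; a minimal sketch with illustrative name, value, and selector:

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: sample-pdb                      # illustrative name
spec:
  minAvailable: 1                       # updating and patching fields like this is what the test exercises
  selector:
    matchLabels:
      app: sample                       # illustrative label

The test only verifies that the object round-trips through the API and is processed by the controller; the selector does not need to match any running pods for that.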
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":26,"skipped":472,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:28.491: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod May 6 22:17:28.512: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:34.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8601" for this suite. • [SLOW TEST:5.952 seconds] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":13,"skipped":223,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:46.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod busybox-3bb61db8-5678-4050-9203-125f691b2462 in namespace container-probe-827 May 6 22:16:50.428: INFO: Started pod busybox-3bb61db8-5678-4050-9203-125f691b2462 in namespace container-probe-827 STEP: checking the pod's current state and verifying that 
restartCount is present May 6 22:16:50.430: INFO: Initial restart count of pod busybox-3bb61db8-5678-4050-9203-125f691b2462 is 0 May 6 22:17:40.556: INFO: Restart count of pod container-probe-827/busybox-3bb61db8-5678-4050-9203-125f691b2462 is now 1 (50.125159948s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:40.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-827" for this suite. • [SLOW TEST:54.187 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":505,"failed":0} S ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:33.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:33.136: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 6 22:17:38.139: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 22:17:38.140: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:80 May 6 22:17:42.162: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5528 a6ccd689-f45e-425e-8fbe-22bb7aec07de 49604 1 2022-05-06 22:17:38 +0000 UTC map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-05-06 22:17:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2022-05-06 22:17:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0089290c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-05-06 22:17:38 +0000 UTC,LastTransitionTime:2022-05-06 22:17:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2022-05-06 22:17:41 +0000 UTC,LastTransitionTime:2022-05-06 22:17:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 22:17:42.164: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-5528 36a98d65-ea5d-4a94-8f2d-fba610290435 49593 1 2022-05-06 22:17:38 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a6ccd689-f45e-425e-8fbe-22bb7aec07de 0xc008950757 0xc008950758}] [] [{kube-controller-manager Update apps/v1 2022-05-06 22:17:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a6ccd689-f45e-425e-8fbe-22bb7aec07de\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0089507e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] nil [] map[] [] }},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 22:17:42.168: INFO: Pod "test-cleanup-deployment-5b4d99b59b-zkdbk" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-zkdbk test-cleanup-deployment-5b4d99b59b- deployment-5528 1aa5382a-45d3-444f-9e33-f20b6f5e8cb7 49592 0 2022-05-06 22:17:38 +0000 UTC map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[k8s.v1.cni.cncf.io/network-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.159" ], "mac": "6a:21:d5:96:fd:74", "default": true, "dns": {} }] k8s.v1.cni.cncf.io/networks-status:[{ "name": "default-cni-network", "interface": "eth0", "ips": [ "10.244.4.159" ], "mac": "6a:21:d5:96:fd:74", "default": true, "dns": {} }] kubernetes.io/psp:collectd] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 36a98d65-ea5d-4a94-8f2d-fba610290435 0xc00888c6af 0xc00888c6c0}] [] [{kube-controller-manager Update v1 2022-05-06 22:17:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"36a98d65-ea5d-4a94-8f2d-fba610290435\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {multus Update v1 2022-05-06 22:17:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:k8s.v1.cni.cncf.io/network-status":{},"f:k8s.v1.cni.cncf.io/networks-status":{}}}}} {kubelet Update v1 2022-05-06 22:17:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.4.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lvzj5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lvzj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:node2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:17:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:17:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:17:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-05-06 22:17:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.10.190.208,PodIP:10.244.4.159,StartTime:2022-05-06 22:17:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-05-06 22:17:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:docker-pullable://k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:docker://668f08b3aa2c8537d44ab527964dbb9c3ee659a093cfcb014644a5a7888a42f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.4.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5528" for this suite. 
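The deployment dump above shows RevisionHistoryLimit:*0, which is what makes the old cleanup-pod replica set eligible for deletion as soon as the new rollout progresses. A minimal equivalent manifest; the names and image are taken from the log, the rest is a sketch:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  replicas: 1
  revisionHistoryLimit: 0               # retain no old ReplicaSets after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: k8s.gcr.io/e2e-test-images/agnhost:2.32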
• [SLOW TEST:9.070 seconds] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":27,"skipped":479,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:42.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:42.248: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd" in namespace "security-context-test-2470" to be "Succeeded or Failed" May 6 22:17:42.250: INFO: Pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008889ms May 6 22:17:44.253: INFO: Pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005007792s May 6 22:17:46.257: INFO: Pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00894767s May 6 22:17:46.257: INFO: Pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd" satisfied condition "Succeeded or Failed" May 6 22:17:46.263: INFO: Got logs for pod "busybox-privileged-false-242a737b-9aa3-47a2-8d4f-72ff83de04bd": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:46.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2470" for this suite. 
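The "ip: RTNETLINK answers: Operation not permitted" log captured above is the expected outcome of attempting network configuration without privilege. A sketch of the kind of pod the test creates; the exact command is an assumption, not copied from the test source:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-privileged-false-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "ip link add dummy0 type dummy"]   # fails when unprivileged
    securityContext:
      privileged: false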
• ------------------------------ {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":495,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:25.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:25.374: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Pending, waiting for it to be Running (with Ready = true) May 6 22:17:27.377: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Pending, waiting for it to be Running (with Ready = true) May 6 22:17:29.384: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:31.378: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:33.378: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:35.379: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:37.379: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:39.377: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:41.378: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:43.378: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:45.379: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = false) May 6 22:17:47.378: INFO: The status of Pod test-webserver-65f7a9ba-97a0-4334-9732-bc93b787313b is Running (Ready = true) May 6 22:17:47.383: INFO: Container started at 2022-05-06 22:17:28 +0000 UTC, pod became ready at 2022-05-06 22:17:45 +0000 UTC [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:47.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1518" for this suite. 
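The long run of "Running (Ready = false)" lines above is a readiness probe's initial delay playing out. A sketch of a pod with the same behavior; the image, port, and timing values here are assumptions rather than the test's exact parameters:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo             # illustrative name
spec:
  containers:
  - name: test-webserver
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    args: ["test-webserver"]
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20           # pod reports Ready = false at least this long
      periodSeconds: 5

The test asserts both that readiness arrives only after the delay and that the restart count stays at zero throughout.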
• [SLOW TEST:22.051 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":508,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:34.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 May 6 22:17:34.486: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the sample API server. May 6 22:17:35.017: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 6 22:17:37.049: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:17:39.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, 
loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:17:41.053: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:17:43.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:17:45.054: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472255, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-64f6b9dc99\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:17:48.774: INFO: Waited 1.712844174s for the sample-apiserver to be ready to handle requests. 
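The registration that just completed, and the versionPriority patch issued in the next step, both act on an APIService object of roughly this shape; the service name and priority values here are assumptions:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000            # illustrative
  versionPriority: 200                  # the kubectl patch below raises this to 400
  service:
    name: sample-api                    # illustrative service name
    namespace: aggregator-5338
  insecureSkipTLSVerify: true           # a production setup would pin caBundle instead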
STEP: Read Status for v1alpha1.wardle.example.com STEP: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' STEP: List APIServices May 6 22:17:49.329: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:50.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-5338" for this suite. • [SLOW TEST:15.755 seconds] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":14,"skipped":234,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:47.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars May 6 22:17:47.463: INFO: Waiting up to 5m0s for pod "downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0" in namespace "downward-api-1903" to be "Succeeded or Failed" May 6 22:17:47.469: INFO: Pod "downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14635ms May 6 22:17:49.473: INFO: Pod "downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009826244s May 6 22:17:51.478: INFO: Pod "downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014877755s STEP: Saw pod success May 6 22:17:51.478: INFO: Pod "downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0" satisfied condition "Succeeded or Failed" May 6 22:17:51.481: INFO: Trying to get logs from node node2 pod downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0 container dapi-container: STEP: delete the pod May 6 22:17:51.492: INFO: Waiting for pod downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0 to disappear May 6 22:17:51.494: INFO: Pod downward-api-2adf98ed-313a-4b65-9940-7d82e43671c0 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:51.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1903" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":531,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:46.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename endpointslicemirroring STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: mirroring a new custom Endpoint May 6 22:17:46.335: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 STEP: mirroring an update to a custom Endpoint May 6 22:17:48.344: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 STEP: mirroring deletion of a custom Endpoint May 6 22:17:50.353: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:52.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslicemirroring-3597" for this suite. 
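The mirroring assertions above (expecting 10.2.3.4 while 10.1.2.3 was still being served during the update) act on a plain Endpoints object; the EndpointSliceMirroring controller copies it into an EndpointSlice automatically. A minimal sketch, assuming a selectorless Service of the same name already exists:

apiVersion: v1
kind: Endpoints
metadata:
  name: example-custom-endpoints        # illustrative; must match the Service name
subsets:
- addresses:
  - ip: 10.1.2.3                        # updating this to 10.2.3.4 is the 'update' step above
  ports:
  - name: example
    port: 80
    protocol: TCP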
• [SLOW TEST:6.063 seconds] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should mirror a custom Endpoints resource through create update and delete [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":29,"skipped":509,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:53.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0506 22:12:53.299977 31 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a suspended cronjob STEP: Ensuring no jobs are scheduled STEP: Ensuring no job exists by listing jobs explicitly STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:53.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8767" for this suite. 
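The five-minute wall clock on the CronJob test just below is inherent to what it proves: it creates a suspended CronJob and then has to watch long enough to show that nothing is ever scheduled. A minimal sketch using batch/v1, which the deprecation warning above recommends over batch/v1beta1:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: suspended-demo                  # illustrative name
spec:
  schedule: "*/1 * * * *"
  suspend: true                         # the controller creates no Jobs while this is set
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: busybox
            command: ["true"]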
• [SLOW TEST:300.053 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":14,"skipped":300,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:52.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:52.427: INFO: Got root ca configmap in namespace "svcaccounts-930" May 6 22:17:52.431: INFO: Deleted root ca configmap in namespace "svcaccounts-930" STEP: waiting for a new root ca configmap created May 6 22:17:52.934: INFO: Recreated root ca configmap in namespace "svcaccounts-930" May 6 22:17:52.939: INFO: Updated root ca configmap in namespace "svcaccounts-930" STEP: waiting for the root ca configmap reconciled May 6 22:17:53.442: INFO: Reconciled root ca configmap in namespace "svcaccounts-930" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:53.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-930" for this suite. 
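The reconciliation verified above is easy to observe by hand; a sketch using plain kubectl, with NAMESPACE as a placeholder:

kubectl -n NAMESPACE delete configmap kube-root-ca.crt
kubectl -n NAMESPACE get configmap kube-root-ca.crt -w    # recreated by the root-CA publisher within moments

The test's "Reconciled" step additionally modifies the ConfigMap's data and waits for the controller to restore the original CA bundle.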
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":30,"skipped":529,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} SS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:40.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:17:40.946: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:17:42.957: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472260, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472260, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472260, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:17:45.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:45.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3792-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:54.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3125" for this suite. STEP: Destroying namespace "webhook-3125-markers" for this suite. 
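The storage-version sequence above (create a CR while v1 is the storage version, flip storage to v2, patch the CR again) pivots on the versions stanza of the CRD. A trimmed-down sketch with illustrative group and kind names:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.webhook.example.com       # illustrative
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true                       # the test starts here...
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false                      # ...then patches storage over to v2
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true

The mutating webhook has to keep serving both versions correctly while the storage version changes underneath it, which is what the final patch step checks.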
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.557 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":39,"skipped":506,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ May 6 22:17:54.206: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:51.613: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 22:17:54.659: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:54.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4208" for this suite. 
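FallbackToLogsOnError, exercised above, tells the kubelet to use the tail of the container log as the termination message when the container fails and wrote nothing to its termination message file. A minimal sketch that reproduces the "DONE" message seen in the log:

apiVersion: v1
kind: Pod
metadata:
  name: termination-demo                # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "echo DONE; exit 1"]   # log tail becomes the termination message
    terminationMessagePolicy: FallbackToLogsOnError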
• ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:47.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:16:47.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 6 22:16:55.503: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:16:55Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:16:55Z]] name:name1 resourceVersion:48697 uid:710f7bc9-7e7f-4e31-bda4-6382c92b4c8d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 6 22:17:05.510: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:17:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:17:05Z]] name:name2 resourceVersion:48825 uid:45140c22-6dc2-4b67-abba-b6225a475c13] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 6 22:17:15.517: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:16:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:17:15Z]] name:name1 resourceVersion:48990 uid:710f7bc9-7e7f-4e31-bda4-6382c92b4c8d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 6 22:17:25.522: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:17:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:17:25Z]] name:name2 resourceVersion:49180 uid:45140c22-6dc2-4b67-abba-b6225a475c13] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 6 22:17:35.527: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:16:55Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:17:15Z]] 
name:name1 resourceVersion:49452 uid:710f7bc9-7e7f-4e31-bda4-6382c92b4c8d] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 6 22:17:45.533: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2022-05-06T22:17:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2022-05-06T22:17:25Z]] name:name2 resourceVersion:49648 uid:45140c22-6dc2-4b67-abba-b6225a475c13] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:56.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7711" for this suite. • [SLOW TEST:68.113 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":41,"skipped":740,"failed":1,"failures":["[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} May 6 22:17:56.051: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:53.347: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a volume subpath [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test substitution in volume subpath May 6 22:17:53.381: INFO: Waiting up to 5m0s for pod "var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e" in namespace "var-expansion-554" to be "Succeeded or Failed" May 6 22:17:53.383: INFO: Pod "var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04081ms May 6 22:17:55.388: INFO: Pod "var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006781558s May 6 22:17:57.392: INFO: Pod "var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010848995s STEP: Saw pod success May 6 22:17:57.392: INFO: Pod "var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e" satisfied condition "Succeeded or Failed" May 6 22:17:57.394: INFO: Trying to get logs from node node2 pod var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e container dapi-container: STEP: delete the pod May 6 22:17:57.407: INFO: Waiting for pod var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e to disappear May 6 22:17:57.409: INFO: Pod var-expansion-bee6165e-c14d-49e2-b4b9-c872cd9f093e no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:57.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-554" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":-1,"completed":15,"skipped":311,"failed":0} May 6 22:17:57.419: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:53.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 22:17:53.486: INFO: Waiting up to 5m0s for pod "pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1" in namespace "emptydir-621" to be "Succeeded or Failed" May 6 22:17:53.489: INFO: Pod "pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.297203ms May 6 22:17:55.492: INFO: Pod "pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005712533s May 6 22:17:57.496: INFO: Pod "pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009811751s STEP: Saw pod success May 6 22:17:57.496: INFO: Pod "pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1" satisfied condition "Succeeded or Failed" May 6 22:17:57.498: INFO: Trying to get logs from node node1 pod pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1 container test-container: STEP: delete the pod May 6 22:17:57.512: INFO: Waiting for pod pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1 to disappear May 6 22:17:57.514: INFO: Pod pod-d05c398c-c9d8-4ca9-bbe4-817497a893f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:17:57.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-621" for this suite. 
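------------------------------
Illustrative aside, not output from the run: the (non-root,0644,tmpfs) case above amounts to a memory-backed emptyDir (mounted as tmpfs) written by a non-root container with 0644 file permissions. A minimal sketch of a pod of that shape, assuming a busybox image and an arbitrary non-root UID of 1000:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)

    nonRoot := int64(1000) // illustrative non-root UID
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy:   corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
            Volumes: []corev1.Volume{{
                Name: "scratch",
                VolumeSource: corev1.VolumeSource{
                    // Medium: Memory backs the emptyDir with tmpfs, which is
                    // the "tmpfs" half of the test name above.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:  "test-container",
                Image: "busybox:1.35", // illustrative image
                Command: []string{"sh", "-c",
                    "touch /mnt/scratch/f && chmod 0644 /mnt/scratch/f && stat -c '%a' /mnt/scratch/f"},
                VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
            }},
        },
    }

    if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}
------------------------------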
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":531,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} May 6 22:17:57.523: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:50.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 22:17:50.455: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 22:17:52.462: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472270, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472270, loc:(*time.Location)(0x9e2e180)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472270, loc:(*time.Location)(0x9e2e180)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63787472270, loc:(*time.Location)(0x9e2e180)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:17:55.474: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 May 6 22:17:55.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:18:03.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3141" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:13.355 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":15,"skipped":240,"failed":2,"failures":["[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} May 6 22:18:03.592: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:16:14.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-3897 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 6 22:16:14.709: INFO: Found 0 stateful pods, waiting for 3 May 6 22:16:24.714: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:16:24.714: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:16:24.714: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 22:16:34.713: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:16:34.713: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:16:34.713: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 6 22:16:34.739: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 22:16:44.768: INFO: Updating stateful set ss2 May 6 22:16:44.774: INFO: Waiting for Pod statefulset-3897/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 STEP: Restoring Pods to the correct revision when they are deleted May 6 22:16:54.798: INFO: Found 1 stateful pods, waiting for 3 May 6 
22:17:04.802: INFO: Found 2 stateful pods, waiting for 3 May 6 22:17:14.803: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:14.803: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:14.803: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 22:17:24.803: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:24.803: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:24.803: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 22:17:24.826: INFO: Updating stateful set ss2 May 6 22:17:24.830: INFO: Waiting for Pod statefulset-3897/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 6 22:17:34.853: INFO: Updating stateful set ss2 May 6 22:17:34.859: INFO: Waiting for StatefulSet statefulset-3897/ss2 to complete update May 6 22:17:34.859: INFO: Waiting for Pod statefulset-3897/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 6 22:17:44.866: INFO: Waiting for StatefulSet statefulset-3897/ss2 to complete update May 6 22:17:44.866: INFO: Waiting for Pod statefulset-3897/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:17:54.865: INFO: Deleting all statefulset in ns statefulset-3897 May 6 22:17:54.867: INFO: Scaling statefulset ss2 to 0 May 6 22:18:24.879: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:18:24.882: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:18:24.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3897" for this suite. 
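------------------------------
Illustrative aside, not output from the run: the canary and phased roll-outs above are driven by the StatefulSet RollingUpdate partition. Only pods with ordinal >= partition move to the new revision, so on the 3-replica set above a partition of 2 updates ss2-2 alone, and lowering the partition step by step (2 -> 1 -> 0) produces the phased roll-out. A minimal sketch of that strategy:

package main

import (
    appsv1 "k8s.io/api/apps/v1"
)

// withPartition builds the update strategy used for a canary roll-out:
// ordinals below the partition stay on the old revision even after the
// pod template changes.
func withPartition(partition int32) appsv1.StatefulSetUpdateStrategy {
    return appsv1.StatefulSetUpdateStrategy{
        Type: appsv1.RollingUpdateStatefulSetStrategyType,
        RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
            Partition: &partition,
        },
    }
}

func main() {
    canary := withPartition(2) // leave ordinals 0 and 1 on the old revision
    _ = canary                 // assign to statefulSet.Spec.UpdateStrategy
}
------------------------------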
• [SLOW TEST:130.223 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":40,"skipped":761,"failed":0} May 6 22:18:24.902: INFO: Running AfterSuite actions on all nodes [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:12:35.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2420 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2420 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2420 May 6 22:12:35.426: INFO: Found 0 stateful pods, waiting for 1 May 6 22:12:45.430: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 6 22:12:45.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:12:45.676: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:12:45.676: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:12:45.676: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:12:45.682: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 22:12:55.686: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 22:12:55.686: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:12:55.697: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.99999947s May 6 22:12:56.701: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.997230617s May 6 22:12:57.704: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.99330031s May 6 22:12:58.709: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.988504881s 
May 6 22:12:59.711: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985850742s May 6 22:13:00.714: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.983299517s May 6 22:13:01.717: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.9803315s May 6 22:13:02.719: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.977600387s May 6 22:13:03.723: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.975024753s May 6 22:13:04.726: INFO: Verifying statefulset ss doesn't scale past 1 for another 971.347692ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2420 May 6 22:13:05.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:13:06.298: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:13:06.298: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:13:06.298: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:13:06.300: INFO: Found 1 stateful pods, waiting for 3 May 6 22:13:16.306: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:13:16.306: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:13:16.306: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 6 22:13:16.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:13:16.567: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:13:16.567: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:13:16.567: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:13:16.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:13:16.841: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:13:16.841: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:13:16.841: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:13:16.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:13:17.091: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:13:17.091: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:13:17.091: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:13:17.091: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:13:17.095: INFO: Waiting for stateful set 
status.readyReplicas to become 0, currently 3 May 6 22:13:27.103: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:27.103: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:27.103: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 22:13:27.113: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999477s May 6 22:13:28.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995664569s May 6 22:13:29.120: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992204203s May 6 22:13:30.125: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.988678996s May 6 22:13:31.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.984713393s May 6 22:13:32.132: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.98180616s May 6 22:13:33.137: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.977409162s May 6 22:13:34.141: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.971924903s May 6 22:13:35.146: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.967793335s May 6 22:13:36.150: INFO: Verifying statefulset ss doesn't scale past 3 for another 963.41548ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-2420 May 6 22:13:37.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:13:37.470: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:13:37.470: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:13:37.470: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:13:37.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:13:37.695: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:13:37.695: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:13:37.695: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:13:37.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:13:39.045: INFO: rc: 126 May 6 22:13:39.045: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: OCI runtime exec failed: exec failed: container_linux.go:364: creating new parent process caused: container_linux.go:1998: running lstat on namespace path "/proc/64442/ns/ipc" caused: lstat /proc/64442/ns/ipc: no such file or directory: unknown stderr: command terminated with exit code 126 error: exit status 126 May 6 22:13:49.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:13:49.211: INFO: rc: 1 May 6 22:13:49.211: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 [identical 10s retries from 22:13:59 through 22:18:33, each failing with rc: 1 and Error from server (NotFound): pods "ss-2" not found, are elided] May 6 22:18:43.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2420 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:18:43.677: INFO: rc: 1 May 6 22:18:43.677: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: May 6 22:18:43.677: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:18:43.695: INFO: Deleting all statefulset in ns statefulset-2420 May 6 22:18:43.699: INFO: Scaling statefulset ss to 0 May 6 22:18:43.706: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:18:43.709: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:18:43.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2420" for this suite. • [SLOW TEST:368.333 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":20,"skipped":391,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} May 6 22:18:43.730: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":719,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:17:28.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:105 STEP: Creating service test in namespace statefulset-2558 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet May 6 22:17:28.484: INFO: Found 0 stateful pods, waiting for 3 May 6 22:17:38.489: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:38.489: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:38.489: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 22:17:48.489: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:48.489: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:48.489: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 6 22:17:48.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2558 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:17:49.020: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:17:49.020: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 
6 22:17:49.020: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 May 6 22:17:59.052: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 6 22:18:09.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2558 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:18:09.454: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:18:09.454: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:18:09.454: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:18:19.474: INFO: Waiting for StatefulSet statefulset-2558/ss2 to complete update May 6 22:18:19.475: INFO: Waiting for Pod statefulset-2558/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 6 22:18:19.475: INFO: Waiting for Pod statefulset-2558/ss2-1 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 6 22:18:29.483: INFO: Waiting for StatefulSet statefulset-2558/ss2 to complete update May 6 22:18:29.483: INFO: Waiting for Pod statefulset-2558/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 May 6 22:18:39.486: INFO: Waiting for StatefulSet statefulset-2558/ss2 to complete update STEP: Rolling back to a previous revision May 6 22:18:49.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2558 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 22:18:49.752: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" May 6 22:18:49.752: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 22:18:49.752: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 22:18:59.784: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 6 22:19:09.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=statefulset-2558 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 22:19:10.064: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" May 6 22:19:10.065: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 22:19:10.065: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 22:19:30.082: INFO: Waiting for StatefulSet statefulset-2558/ss2 to complete update May 6 22:19:30.082: INFO: Waiting for Pod statefulset-2558/ss2-0 to have revision ss2-677d6db895 update revision ss2-5bbbc9fc94 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:116 May 6 22:19:40.091: INFO: Deleting all statefulset in ns statefulset-2558 May 6 22:19:40.093: INFO: Scaling statefulset ss2 to 0 May 6 22:20:00.109: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:20:00.112: INFO: Deleting statefulset ss2 [AfterEach] 
[sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:20:00.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2558" for this suite. • [SLOW TEST:151.680 seconds] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":48,"skipped":719,"failed":0} May 6 22:20:00.134: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":622,"failed":0} [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client May 6 22:15:58.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/cronjob.go:63 W0506 22:15:58.297175 34 warnings.go:70] batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 May 6 22:21:00.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-5117" for this suite. 
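------------------------------
Illustrative aside, not output from the run: the CronJob spec above relies on concurrencyPolicy Forbid, under which the controller skips a scheduled run while a previous job is still active; that is why the spec spends roughly five minutes ensuring no second job appears. The deprecation warning in the log recommends batch/v1, which this sketch uses; the schedule, image, and command are illustrative, with the job deliberately outliving its one-minute schedule:

package main

import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// forbidConcurrent builds a CronJob whose runs overlap the next schedule
// tick, so ForbidConcurrent visibly suppresses the following run.
func forbidConcurrent() *batchv1.CronJob {
    return &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{Name: "forbid-demo"},
        Spec: batchv1.CronJobSpec{
            Schedule:          "*/1 * * * *",
            ConcurrencyPolicy: batchv1.ForbidConcurrent,
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyNever,
                            Containers: []corev1.Container{{
                                Name:    "sleeper",
                                Image:   "busybox:1.35", // illustrative image
                                Command: []string{"sleep", "300"},
                            }},
                        },
                    },
                },
            },
        },
    }
}

func main() {
    _ = forbidConcurrent() // create via client-go's BatchV1().CronJobs(ns)
}
------------------------------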
• [SLOW TEST:302.058 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":33,"skipped":622,"failed":0} May 6 22:21:00.332: INFO: Running AfterSuite actions on all nodes {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":596,"failed":1,"failures":["[sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]"]} May 6 22:17:54.677: INFO: Running AfterSuite actions on all nodes May 6 22:21:00.391: INFO: Running AfterSuite actions on node 1 May 6 22:21:00.391: INFO: Skipping dumping logs from cluster Summarizing 6 Failures: [Fail] [sig-network] Services [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1169 [Fail] [sig-network] Services [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2497 [Fail] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 [Fail] [sig-network] Services [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 [Fail] [sig-network] Services [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2576 [Fail] [sig-network] Services [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1351 Ran 320 of 5773 Specs in 850.951 seconds FAIL! -- 314 Passed | 6 Failed | 0 Pending | 5453 Skipped Ginkgo ran 1 suite in 14m12.588038464s Test Suite Failed
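------------------------------
Illustrative aside, not output from the run: five of the six failures above are [sig-network] Services specs around NodePort behaviour, three of them session affinity. For orientation when triaging, this is the shape of service those affinity specs exercise; the name, selector, ports, and timeout below are illustrative, not values from the run:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// nodePortWithAffinity builds a NodePort service with ClientIP session
// affinity: a given client IP keeps hitting the same backend until the
// configured timeout elapses, which is the property the failing specs probe.
func nodePortWithAffinity() *corev1.Service {
    timeout := int32(10800) // illustrative affinity timeout in seconds
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport"},
        Spec: corev1.ServiceSpec{
            Type:            corev1.ServiceTypeNodePort,
            Selector:        map[string]string{"app": "affinity"},
            SessionAffinity: corev1.ServiceAffinityClientIP,
            SessionAffinityConfig: &corev1.SessionAffinityConfig{
                ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
            },
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(8080),
            }},
        },
    }
}

func main() {
    _ = nodePortWithAffinity()
}
------------------------------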